DIP_PA4_2616298.ipynb
###Markdown
Digital Image Processing - Programming Assignment \#4 The following programming assignment involves image enhancement tasks in the spatial and frequency domains. The deadline for returning your work is **April 18th, 2019 at 23:59. Please follow carefully the submission instructions given at the end of this notebook.** You are encouraged to seek information in places other than the course book and lecture material, but remember to **list all your sources under references**. If you experience problems that you cannot solve using the course material or related Python documentation, or have any questions regarding the programming assignments in general, please **do not hesitate to contact the course assistant** by e-mail at address `[email protected]`. **Please, fill in your personal details below.** Personal details:* **Name(s) and student ID(s):** `Berke Esmer - 2616298`* **Contact information:** `[email protected]` 4. Image enhancement in spatial domain The gray-scale images `cameraman_noise1.tif` and `cameraman_noise2.tif` and the binary image `logo_noise3.png` contain different types of noise. Your task is to perform image enhancement in the spatial domain so that the noise in all three images is reduced. Please note that you cannot restore the original image (i.e. remove the noise completely). For instance, the __[`scipy.ndimage`](https://docs.scipy.org/doc/scipy/reference/ndimage.html)__ and __[`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html)__ packages provide useful tools for filtering the noise types. Additive Gaussian noise The image `cameraman_noise1.tif` suffers from additive Gaussian noise:
###Code
# read the original image 'cameraman.tif' and its noisy version 'cameraman_noise1.tif'
orig = io.imread('cameraman.tif').astype('int32')
noisy1 = io.imread('cameraman_noise1.tif')
# extract the additive noise from the noisy image by subtracting the original image from the noisy one
noise1 = noisy1.astype('int32') - orig
# display the noisy image, noise and histogram of the noise
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(noisy1, vmin=0, vmax=255, cmap=plt.get_cmap('gray'))
ax[0].set_title('cameraman_noise1')
ax[0].axis('off')
ax[1].imshow(noise1, cmap=plt.get_cmap('gray'))
ax[1].set_title('noise1')
ax[1].axis('off')
ax[2].hist(noise1.flatten(), bins=30, fc='black')
ax[2].set_title('Histogram of noise1')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**4.1. Perform image enhancement on the `cameraman_noise1.tif` image using a `3x3` mean filter and compute the root mean squared error (RMSE) with the original image before and after filtering the noise. Then, display the noisy, enhanced and original image in the same figure.**Hint: You can perform the filtering by first constructing the `3x3` mean filter mask (`NumPy array`) and then convolving the image with it using e.g. __[`scipy.signal.convolve2d()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html)__ function. Please note the __[difference in (integer) division between Python versions 2 and 3](https://stackoverflow.com/questions/21316968/division-in-python-2-7-and-3-3)__.
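For reference, the RMSE used throughout this section for an $M \times N$ image $I$ and its estimate $\hat{I}$ is $\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I(i,j) - \hat{I}(i,j)\right)^2}$; for the `256x256` cameraman image this matches the `sum / (256 * 256)` normalisation used in the code cells below.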
###Code
from scipy import signal
# construct mean filter mask
meanFilter = np.array([[1/9, 1/9, 1/9], [1/9, 1/9, 1/9], [1/9, 1/9, 1/9]]) # 1/9 is a valid division in Python3
# convolve the noisy image with the constructed filter mask
enhancedImage = signal.convolve2d(noisy1, meanFilter, mode='same') # mode = "same" is necessary for RMSE values
# display the noisy, enhanced and original images
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(noisy1, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(enhancedImage, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Image')
ax[2].imshow(orig, cmap=plt.get_cmap('gray'))
ax[2].set_title('Original Image')
fig.tight_layout()
# print RMSE before enhancement
RMSE_BEFORE = np.array(orig - noisy1)
RMSE_BEFORE = RMSE_BEFORE * RMSE_BEFORE
RMSE_BEFORE = RMSE_BEFORE.sum() / (256 * 256) # mean value
RMSE_BEFORE = np.sqrt(RMSE_BEFORE)
print("Before: ", RMSE_BEFORE)
# print RMSE after enhancement
RMSE_AFTER = np.array(orig - enhancedImage)
RMSE_AFTER = RMSE_AFTER * RMSE_AFTER
RMSE_AFTER = RMSE_AFTER.sum() / (256 * 256) # mean value
RMSE_AFTER = np.sqrt(RMSE_AFTER)
print("After: ", RMSE_AFTER)
###Output
_____no_output_____
###Markdown
**4.2. Perform image enhancement on the `cameraman_noise1.tif` image using a `3x3` __[median filter](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.median_filter.html#scipy.ndimage.median_filter)__ and compute the RMSE with the original image before and after filtering the noise. Then, display the noisy, enhanced and original image in the same figure.**
###Code
from scipy.ndimage import median_filter
# apply 3x3 median filter on the noisy image
enhancedMedian = median_filter(noisy1, size = (3,3)) # Median filter 3x3
# display the noisy, enhanced and original images
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(noisy1, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(enhancedMedian, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Image')
ax[2].imshow(orig, cmap=plt.get_cmap('gray'))
ax[2].set_title('Original Image')
fig.tight_layout()
# print RMSE before enhancement
RMSE_BEFORE = np.array(orig - noisy1)
RMSE_BEFORE = RMSE_BEFORE * RMSE_BEFORE
RMSE_BEFORE = RMSE_BEFORE.sum() / (256 * 256) # mean value
RMSE_BEFORE = np.sqrt(RMSE_BEFORE)
print("Before: ", RMSE_BEFORE)
# print RMSE after enhancement
RMSE_AFTER = np.array(orig - enhancedMedian)
RMSE_AFTER = RMSE_AFTER * RMSE_AFTER
RMSE_AFTER = RMSE_AFTER.sum() / (256 * 256) # mean value
RMSE_AFTER = np.sqrt(RMSE_AFTER)
print("After: ", RMSE_AFTER)
###Output
_____no_output_____
###Markdown
**4.3. Perform image enhancement on the `cameraman_noise1.tif` image using a `5x5` __[Wiener filter](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.wiener.html)__ and compute the RMSE with the original image before and after filtering the noise. Then, display the noisy, enhanced and original image in the same figure. Please note that you need to convert the input image into `float64` using `astype('float64')` before applying __[`scipy.signal.wiener()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.wiener.html)__ function!**
###Code
# apply 5x5 Wiener filter on the noisy image
# first convert the input image to float64 using 'astype('float64')'!
noisyAsFloat64 = noisy1.astype('float64')
enhancedWiener = signal.wiener(noisyAsFloat64, mysize = (5,5)) # 5x5 Wiener filter
# display the noisy, enhanced and original images
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(noisy1, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(enhancedWiener, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Image')
ax[2].imshow(orig, cmap=plt.get_cmap('gray'))
ax[2].set_title('Original Image')
fig.tight_layout()
# print RMSE before enhancement
RMSE_BEFORE = np.array(orig - noisy1)
RMSE_BEFORE = RMSE_BEFORE * RMSE_BEFORE
RMSE_BEFORE = RMSE_BEFORE.sum() / (256 * 256) # mean value
RMSE_BEFORE = np.sqrt(RMSE_BEFORE)
print("Before: ", RMSE_BEFORE)
# print RMSE after enhancement
RMSE_AFTER = np.array(orig - enhancedWiener)
RMSE_AFTER = RMSE_AFTER * RMSE_AFTER
RMSE_AFTER = RMSE_AFTER.sum() / (256 * 256) # mean value
RMSE_AFTER = np.sqrt(RMSE_AFTER)
print("After: ", RMSE_AFTER)
###Output
_____no_output_____
###Markdown
**4.4. Finally, display the three images obtained with mean, median and Wiener filters in the same figure.**
###Code
# display the mean, median and Wiener filtered images
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(enhancedImage, cmap=plt.get_cmap('gray'))
ax[0].set_title('Enhanced Mean Filtered Image')
ax[1].imshow(enhancedMedian, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Median Filtered Image')
ax[2].imshow(enhancedWiener, cmap=plt.get_cmap('gray'))
ax[2].set_title('Enhanced Wiener Filtered Image')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**Which method gave the best result? Why?**`It is clear that the Wiener filter removed the noise best. It made the picture smoother. Wiener filters are far and away the most common deblurring technique because they mathematically return the best results. [4]` Salt-and-pepper noise The image `cameraman_noise2.tif` suffers from salt-and-pepper noise:
###Code
# read the 'cameraman_noise2.tif' image
noisy2 = io.imread('cameraman_noise2.tif')
# extract additive noise2
noise2 = noisy2.astype('int32') - orig
# display the noisy image and additive noise
fig, ax = plt.subplots(1, 2)
ax[0].imshow(noisy2, vmin=0, vmax=255, cmap=plt.get_cmap('gray'))
ax[0].set_title('cameraman_noise2')
ax[0].axis('off')
ax[1].imshow(noise2, cmap=plt.get_cmap('gray'))
ax[1].set_title('noise2')
ax[1].axis('off')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**4.5. Utilizing your knowledge in image enhancement, choose a proper filter for reducing the noise in the `cameraman_noise2.tif` image and compute the RMSE with the original image before and after filtering the noise. Then, display the noisy, enhanced and original image in the same figure.**
###Code
# reduce the noise with the method of your choice
## I tried all of the previous filters and found that the median filter works best ##
enhancedMedian2 = median_filter(noisy2, size = (3,3)) # Median filter 3x3
# display the noisy, enhanced and original images
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(noisy2, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(enhancedMedian2, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Image')
ax[2].imshow(orig, cmap=plt.get_cmap('gray'))
ax[2].set_title('Original Image')
fig.tight_layout()
# print RMSE before enhancement
RMSE_BEFORE = np.array(orig - noisy2)
RMSE_BEFORE = RMSE_BEFORE * RMSE_BEFORE
RMSE_BEFORE = RMSE_BEFORE.sum() / (256 * 256) # mean value
RMSE_BEFORE = np.sqrt(RMSE_BEFORE)
print("Before: ", RMSE_BEFORE)
# print RMSE after enhancement
RMSE_AFTER = np.array(orig - enhancedMedian2)
RMSE_AFTER = RMSE_AFTER * RMSE_AFTER
RMSE_AFTER = RMSE_AFTER.sum() / (256 * 256) # mean value
RMSE_AFTER = np.sqrt(RMSE_AFTER)
print("After: ", RMSE_AFTER)
###Output
_____no_output_____
###Markdown
The binary image `logo_noise3.png` suffers from salt-and-pepper noise as well:
###Code
# read 'logo_noise3.png' as binary image
noisy3 = io.imread('logo_noise3.png').astype('bool_')
# display the noisy binary image
fig, ax = plt.subplots(figsize=(10,7))
ax.imshow(noisy3, cmap=plt.get_cmap('gray'))
ax.set_title('logo_noise3')
ax.axis('off')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**4.6. Again, utilizing your knowledge in image enhancement, find a way for reducing the noise in the noisy binary image `logo_noise3.png` and display the noisy and enhanced images in the same figure.**
###Code
# remove the noise with the method of your choice
## Use the same method as before since this is again salt-and-pepper noise ##
enhancedMedian3 = median_filter(noisy3, size = (3,3)) # Median filter 3x3
# display the noisy and enhanced images
fig, ax = plt.subplots(1, 2, figsize=(15,5))
ax[0].imshow(noisy3, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(enhancedMedian3, cmap=plt.get_cmap('gray'))
ax[1].set_title('Enhanced Image')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
5. Image enhancement in frequency domain
###Code
from scipy import fftpack
# read noisy image 'periodic.tif' and compute its Fourier transform (see Assignment #2)
periodic = io.imread('periodic.tif')
periodic_fft = fftpack.fftshift(fftpack.fft2(periodic))
# display the noisy image and the magnitude of its Fourier transform in the same figure
fig, ax = plt.subplots(1, 2)
ax[0].imshow(periodic, vmin=0, vmax=255, cmap=plt.get_cmap('gray'))
ax[0].set_title('Periodic perturbation')
ax[0].axis('off')
ax[1].imshow(np.log(np.abs(periodic_fft)+1), cmap=plt.get_cmap('gray'))
ax[1].set_title('Magnitude of the FFT')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
The image `periodic.tif` contains a periodic, i.e. sinusoidal, perturbation (see e.g. Section 5.2.3 in the course book). Your task is to remove the noise as well as you can. In practice, this consists of two main steps: 1) locating the noise in the frequency domain, and 2) filtering the perturbation frequency using a proper filter. Let's first take a look at what a 2D sinusoidal signal looks like in the 2D Fourier space by plotting three signals with different frequencies, `f=2`, `f=4` and `f=8`, and their Fourier transforms (FT):
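Recall that, with the standard convention, $\mathcal{F}\{\sin(2\pi f_0 x)\} = \frac{1}{2j}\big[\delta(u - f_0) - \delta(u + f_0)\big]$, i.e. a pair of impulses at $\pm f_0$; this is exactly the pair of symmetric peaks visible in the magnitude plots below.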
###Code
# sample (x,y) image coordinate space linearly
nx = 100; ny = 100;
x = np.linspace(-1, 1, nx);
y = np.linspace(-1, 1, ny);
[X, Y] = np.meshgrid(x, y);
# plot the three 2D sinusoids and the magnitudes of their FTs
fig, ax = plt.subplots(2, 3)
f = 2;
z = np.sin(2*np.pi*f*X);
ax[0,0].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,0].axis('off')
ax[0,0].set_title('sinusoid of frequency f = 2')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,0].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,0].axis('off')
ax[1,0].set_title('magnitude of the respective FT')
f = 4;
z = np.sin(2*np.pi*f*X);
ax[0,1].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,1].axis('off')
ax[0,1].set_title('sinusoid of frequency f = 4')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,1].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,1].axis('off')
ax[1,1].set_title('magnitude of the respective FT')
f = 8;
z = np.sin(2*np.pi*f*X);
ax[0,2].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,2].axis('off')
ax[0,2].set_title('sinusoid of frequency f = 8')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,2].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,2].axis('off')
ax[1,2].set_title('magnitude of the respective FT')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
As you can see, a horizontal 2D sinusoid corresponds to two horizontal peaks symmetric about the zero frequency in the magnitude of the Fourier domain, and the higher the frequency, the further away these peaks are from the origin. Now, let's take a look at what happens if we rotate the horizontal 2D sinusoid by 15, 45 and 75 degrees:
###Code
# plot rotated 2D sinusoids and the magnitudes of their FTs
fig, ax = plt.subplots(2, 3)
theta = 15*np.pi/180;
z = np.sin(2*np.pi*f*(Y*np.sin(theta) + X*np.cos(theta)));
ax[0,0].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,0].axis('off')
ax[0,0].set_title('sinusoid tilted at angle 15')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,0].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,0].axis('off')
ax[1,0].set_title('magnitude of the respective FT')
theta = 45*np.pi/180;
z = np.sin(2*np.pi*f*(Y*np.sin(theta) + X*np.cos(theta)));
ax[0,1].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,1].axis('off')
ax[0,1].set_title('sinusoid tilted at angle 45')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,1].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,1].axis('off')
ax[1,1].set_title('magnitude of the respective FT')
theta = 75*np.pi/180;
z = np.sin(2*np.pi*f*(Y*np.sin(theta) + X*np.cos(theta)));
ax[0,2].imshow(z, cmap=plt.get_cmap('gray'))
ax[0,2].axis('off')
ax[0,2].set_title('sinusoid tilted at angle 75')
Z = fftpack.fftshift(fftpack.fft2(z))
ax[1,2].imshow((np.abs(Z)+1), cmap=plt.get_cmap('gray'))
ax[1,2].axis('off')
ax[1,2].set_title('magnitude of the respective FT')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Due to the properties of the 2D FT, the corresponding frequency peaks rotate in exactly the same manner. Now, it should be clear(er) what the periodic perturbation we are dealing with looks like in the FT of the noisy image, i.e. where to look for it. Can you now spot the reason for the periodic perturbation in the spectral image of `periodic.tif`?
###Code
# display the magnitude of the FT
fig, ax = plt.subplots()
ax.imshow(np.log(np.abs(periodic_fft)+1), cmap=plt.get_cmap('gray'))
ax.set_title('magnitude of the FT of the image periodic.tif')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
This kind of periodic perturbation should be filtered with a notch filter. However, in the following, an ideal band-reject filter is used for the sake of simplicity. So perform the following operations in the reserved code cells in order to remove the periodic perturbation from the test image. (Please note that you can also implement a notch filter instead if you prefer.) **5.1. Modify the ideal lowpass (or highpass) filter code from Assignment \#2 to construct an ideal band-reject filter `Hbr` and display band-reject filters with cut-off frequency `D0=0.2` and bandwidths `W=0.05` and `W=0.01` in the same figure.** Hint: See the lecture notes or the course book for what an ideal band-reject filter looks like. An ideal band-reject filter is just a combination of lowpass and highpass filtering, so now you need to combine the conditions `` into one filter in order to reject frequencies within the narrow band.
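As a reminder (this is the standard textbook definition), the ideal band-reject filter with cut-off $D_0$ and bandwidth $W$ is $H(u,v) = 0$ for $D_0 - \frac{W}{2} \le D(u,v) \le D_0 + \frac{W}{2}$ and $H(u,v) = 1$ otherwise, where $D(u,v)$ is the distance from the centre of the shifted spectrum.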
###Code
# create matrix D with absolute frequency values and size of the FT of the image 'periodic.tif'
n = periodic_fft.shape
f1 = ( np.arange(0,n[0])-np.floor(n[0]/2) ) * (2./(n[0]))
f2 = ( np.arange(0,n[1])-np.floor(n[1]/2) ) * (2./(n[1]))
f1, f2 = np.meshgrid(f1, f2)
D = np.sqrt(f1**2 + f2**2)
# set cut-off frequency 'D0' to 0.2
D0 = 0.2
# set the bandwidth 'W' to 0.05
W = 0.05
# initialize filter matrix 'Hbr' with ones (same size as the fft2 of the test image)
Hbr = np.ones(n)
# reject frequencies within the band of width 'W' centred on the cut-off 'D0'; others remain unaltered
Hbr[np.abs(D - D0) <= W / 2] = 0
# do the same to construct ideal band-reject filter with 'W' of 0.01
W2 = 0.01
Hbr2 = np.ones(n)
Hbr2[np.abs(D - D0) <= W2 / 2] = 0
# display both filters with different bandwidths in the same figure
fig, ax = plt.subplots(1, 2, figsize=(15,5))
ax[0].imshow(Hbr, cmap=plt.get_cmap('gray'))
ax[0].set_title('W = 0.05 Plot')
ax[1].imshow(Hbr2, cmap=plt.get_cmap('gray'))
ax[1].set_title('W = 0.01 Plot')
fig.tight_layout()
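# Alternative (a hedged sketch, not required by the assignment): an ideal notch-reject
# filter zeroes only a small neighbourhood around a symmetric pair of peaks at
# (+u0, +v0) and (-u0, -v0) instead of a full ring. It reuses the frequency grids
# f1, f2 built above.
def ideal_notch_reject(u0, v0, radius):
    """Mask that is 0 inside two circular notches and 1 elsewhere."""
    d_pos = np.sqrt((f1 - u0)**2 + (f2 - v0)**2)
    d_neg = np.sqrt((f1 + u0)**2 + (f2 + v0)**2)
    mask = np.ones(f1.shape)
    mask[(d_pos <= radius) | (d_neg <= radius)] = 0
    return mask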
###Output
_____no_output_____
###Markdown
**5.2. Find the perturbation frequency in the magnitude of the FT that should be filtered out and filter the noisy image with a band-reject filter having proper `D0` and `W`. Then, display the reconstructed filtered image and the magnitude of its FT in the same figure.** Hint: You should see two sharp peaks in the spectral image which should be filtered out. They are somewhat hard to spot, but you should know where to look if you followed the introduction part of this assignment carefully. You can either try to determine the perturbation frequency: 1. manually by trial and error, or 2. automatically by finding the peak coordinates with the __[`skimage.feature.peak_local_max()`](http://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.peak_local_max)__ function and picking the corresponding relative frequency from the frequency matrix `D` based on the found peak locations. Please note that you will receive the same amount of points no matter which of the two approaches you choose!
###Code
# find perturbation frequency 'D0' manually or automatically
## The peaks are fairly easy to spot around the 250 level, but let's verify them with peak_local_max ##
from skimage import feature
print(feature.peak_local_max(np.abs(periodic_fft), threshold_rel = 0.1))
D0 = 0.256
W = 0.01
# create a filter mask 'Hbr' size of the FT of the test image
Hbr = np.ones(n)
# set frequencies within the _narrow_ reject band 'W' around 'D0' to zero; others remain unaltered
Hbr[np.abs(D - D0) <= W / 2] = 0.0
# apply the ideal band-reject filter to fft the test image
HbrTest = Hbr * periodic_fft
# reconstruct the enhanced image (see Assignment #2)
HbrShifted = fftpack.ifftshift(HbrTest)
HbrT2 = fftpack.ifft2(HbrShifted)
HbrReal = np.real(HbrT2)
HbrClip = np.clip(HbrReal, 0, 255)
HbrClipFT = fftpack.fft2(HbrClip)
HbrClipFTCenter = fftpack.fftshift(HbrClipFT)
HbrValue = np.log(np.abs(HbrClipFTCenter) + 1)
# display the enhanced image and the magnitude of its FT
fig, ax = plt.subplots(1, 2, figsize=(15,5))
ax[0].imshow(HbrClip, cmap=plt.get_cmap('gray'))
ax[0].set_title('Enhanced Image')
ax[1].imshow(HbrValue, cmap=plt.get_cmap('gray'))
ax[1].set_title('FT')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**5.3. Finally, display the noisy image `periodic.tif` and the enhanced image in the same figure.**
###Code
# display noisy and "restored" image
fig, ax = plt.subplots(1, 2, figsize=(15,5))
ax[0].imshow(periodic, cmap=plt.get_cmap('gray'))
ax[0].set_title('Noisy Image')
ax[1].imshow(HbrClip, cmap=plt.get_cmap('gray'))
ax[1].set_title('Restored Image')
fig.tight_layout()
###Output
_____no_output_____
output/train.ipynb
###Markdown
Optiver Realized Volatility Prediction - Train**This notebook seeks to EDITS HERE**--------- Files**book_[train/test].parquet** - A [parquet](https://arrow.apache.org/docs/python/parquet.html) file partitioned by `stock_id`. Provides order book data on the most competitive buy and sell orders entered into the market. The top two levels of the book are shared. The first level of the book will be more competitive in price terms, so it will receive execution priority over the second level. - `stock_id` - ID code for the stock. Not all `stock_id`s exist in every time bucket. Parquet coerces this column to the categorical data type when loaded; you may wish to convert it to int8. - `time_id` - ID code for the time bucket. `time_id`s are not necessarily sequential but are consistent across all stocks. - `seconds_in_bucket` - Number of seconds from the start of the bucket, always starting from 0. - `bid_price[1/2]` - Normalized prices of the most/second most competitive buy level. - `ask_price[1/2]` - Normalized prices of the most/second most competitive sell level. - `bid_size[1/2]` - The number of shares on the most/second most competitive buy level. - `ask_size[1/2]` - The number of shares on the most/second most competitive sell level. **trade_[train/test].parquet** - A [parquet](https://arrow.apache.org/docs/python/parquet.html) file partitioned by `stock_id`. Contains data on trades that actually executed. Usually, in the market, there are more passive buy/sell intention updates (book updates) than actual trades; therefore, one may expect this file to be sparser than the order book. - `stock_id` - Same as above. - `time_id` - Same as above. - `seconds_in_bucket` - Same as above. Note that since trade and book data are taken from the same time window and trade data is sparser in general, this field does not necessarily start from 0. - `price` - The average price of executed transactions happening in one second. Prices have been normalized and the average has been weighted by the number of shares traded in each transaction. - `size` - The total number of shares traded. - `order_count` - The number of unique trade orders taking place. **train.csv** The ground truth values for the training set. - `stock_id` - Same as above, but since this is a csv the column will load as an integer instead of categorical. - `time_id` - Same as above. - `target` - The realized volatility computed over the 10 minute window following the feature data under the same `stock_id`/`time_id`. There is no overlap between feature and target data. **test.csv** Provides the mapping between the other data files and the submission file. As with other test files, most of the data is only available to your notebook upon submission, with just the first few rows available for download. - `stock_id` - Same as above. - `time_id` - Same as above. - `row_id` - Unique identifier for the submission row. There is one row for each existing `stock_id`/`time_id` pair. Each time window does not necessarily contain every individual stock. **sample_submission.csv** - A sample submission file in the correct format. - `row_id` - Same as in test.csv. - `target` - Same definition as in **train.csv**. The benchmark uses the median target value from **train.csv**. Prepare Environment Import Packages
###Code
# General packages
import pandas as pd
import numpy as np
import pyarrow.parquet as pq # To handle parquet files
import os
import gc
import random
from tqdm import tqdm, tqdm_notebook
from pathlib import Path
import multiprocessing
from joblib import Parallel, delayed
import time
import warnings
warnings.filterwarnings('ignore')
# Data vis packages
import matplotlib.pyplot as plt
%matplotlib inline
# Data prep
from sklearn.preprocessing import RobustScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
# Modelling packages
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras import backend as k
# Key layers
from tensorflow.keras.models import Model, Sequential, load_model
from tensorflow.keras.layers import Input, Add, Dense, Flatten
# Activation layers
from tensorflow.keras.layers import ReLU, LeakyReLU, ELU, ThresholdedReLU
# Dropout layers
from tensorflow.keras.layers import Dropout, AlphaDropout, GaussianDropout
# Normalisation layers
from tensorflow.keras.layers import BatchNormalization
# Embedding layers
from tensorflow.keras.layers import Embedding, Concatenate, Reshape
# Callbacks
from tensorflow.keras.callbacks import Callback, EarlyStopping, LearningRateScheduler, ModelCheckpoint
# Optimisers
from tensorflow.keras.optimizers import SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Nadam, Ftrl
# Model cross validation and evaluation
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.losses import binary_crossentropy
# For Bayesian hyperparameter searching
from skopt import gbrt_minimize, gp_minimize
from skopt.utils import use_named_args
from skopt.space import Real, Categorical, Integer
strategy = tf.distribute.get_strategy()
REPLICAS = strategy.num_replicas_in_sync
# Data access
gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
session = tf.compat.v1.InteractiveSession(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
# Get number of cpu cores for multiprocessing
try:
cpus = int(multiprocessing.cpu_count())
except NotImplementedError:
cpus = 1 # Default number of cores
print(f"Num GPUs Available: {len(tf.config.experimental.list_physical_devices('GPU'))}")
print(f"Num CPU Threads Available: {cpus}")
print(f'REPLICAS: {REPLICAS}')
###Output
_____no_output_____
###Markdown
Read in Data
###Code
# Set whether raw data need to be transformed
raw_data = True
# Data paths
comp_dir_path = Path("../input/optiver-realized-volatility-prediction")
if raw_data == True:
# Train paths
train_book_path = comp_dir_path/"book_train.parquet"
train_trade_path = comp_dir_path/"trade_train.parquet"
train_labels_path = comp_dir_path/"train.csv"
# Test paths
test_book_path = comp_dir_path/"book_test.parquet"
test_trade_path = comp_dir_path/"trade_test.parquet"
test_labels_path = comp_dir_path/"test.csv"
# Sample submission path
sample_sub_path = comp_dir_path/"sample_submission.csv"
# Define helper functions for data reading
def get_stock_ids_list(data_dir_path):
data_dir = os.listdir(data_dir_path)
# Get list of stock ids in directory
stock_ids = list(map(lambda x: x.split("=")[1], data_dir))
return stock_ids
def load_book_stock_id_data(stock_id):
# Get stock id extension
stock_id_ext = f"stock_id={stock_id}"
# Read individual stock parquet file
if is_train_test == "train":
book_stock_id_path = os.path.join(train_book_path, stock_id_ext)
elif is_train_test == "test":
book_stock_id_path = os.path.join(test_book_path, stock_id_ext)
book_stock_id = pd.read_parquet(book_stock_id_path)
# Add stock id feature from filename
book_stock_id["stock_id"] = int(stock_id)
return book_stock_id
def load_trade_stock_id_data(stock_id):
# Get stock id extension
stock_id_ext = f"stock_id={stock_id}"
# Read individual stock parquet file
if is_train_test == "train":
trade_stock_id_path = os.path.join(train_trade_path, stock_id_ext)
elif is_train_test == "test":
trade_stock_id_path = os.path.join(test_trade_path, stock_id_ext)
trade_stock_id = pd.read_parquet(trade_stock_id_path)
# Add stock id feature from filename
trade_stock_id["stock_id"] = int(stock_id)
return trade_stock_id
%%time
# Get list of stock ids
train_stock_ids = get_stock_ids_list(train_book_path)
test_stock_ids = get_stock_ids_list(test_book_path)
if raw_data == True:
# Read train data
is_train_test = "train"
# Create worker pool and read
pool = multiprocessing.Pool(processes=cpus)
train_book = pd.concat(pool.map(load_book_stock_id_data, train_stock_ids))
train_trade = pd.concat(pool.map(load_trade_stock_id_data, train_stock_ids))
train_labels = pd.read_csv(train_labels_path)
# Close worker pool
pool.close()
pool.join()
else:
train = pd.read_csv(comp_dir_path/"train_transformed.csv")
# Read test data
is_train_test = "test"
# Create worker pool and read
pool = multiprocessing.Pool(processes=cpus)
test_book = pd.concat(pool.map(load_book_stock_id_data, test_stock_ids))
test_trade = pd.concat(pool.map(load_trade_stock_id_data, test_stock_ids))
test_labels = pd.read_csv(test_labels_path)
# Read sample submission
sample_sub = pd.read_csv(sample_sub_path)
# Print data dimensions
print("TRAIN DATA DIMENSIONS")
if raw_data == True:
print(f"train_book shape: {train_book.shape}")
print(f"train_trade shape: {train_trade.shape}")
print(f"train_labels shape: {train_labels.shape}")
else:
print(f"train shape: {train.shape}")
print("\nTEST DATA DIMENSIONS")
print(f"test_book shape: {test_book.shape}")
print(f"test_trade shape: {test_trade.shape}")
print(f"test_labels shape: {test_labels.shape}\n")
###Output
_____no_output_____
###Markdown
Data Preparation Define Feature Engineering Functions
###Code
# Define helper functions for data manipulation
def apply_parallel(df_grouped, func):
"""
Uses multithreading for groupby and apply operations. Equivalent to df_grouped.apply(func)
"""
with multiprocessing.Pool(processes=cpus) as p:
ret_list = p.map(func, [group for name, group in df_grouped])
return pd.concat(ret_list)
def get_log_return(list_stock_prices):
return np.log(list_stock_prices).diff()
def get_trade_log_return(df_trade, col_stock_id, col_time_id, col_price):
"""
Returns the Log Return at each time ID.
"""
# Create worker pool and apply function
trade_log_return = apply_parallel(df_trade.groupby([col_stock_id, col_time_id])[col_price], get_log_return)
trade_log_return = trade_log_return.fillna(0)
return trade_log_return
def get_agg_feature(df, col_name, func):
"""
Returns aggregated feature by stock ID and time ID based on input df and feature.
"""
if "function" in str(func):
func_str = str(func).split(" ")[1]
agg_feat_col_name = f"{col_name}_{func_str}"
else:
agg_feat_col_name = f"{col_name}_{func}"
agg_feat = df.groupby(by=["stock_id", "time_id"])[col_name].agg(func)
agg_feat = agg_feat.replace([np.inf, -np.inf], np.nan).fillna(0)
agg_feat = agg_feat.reset_index().rename(columns={col_name: agg_feat_col_name})
return agg_feat
def get_wap(df_book, col_bid_price, col_ask_price, col_bid_size, col_ask_size):
"""
Returns Weighted Average Price.
"""
wap_numerator = df_book[col_bid_price] * df_book[col_ask_size]
wap_numerator += df_book[col_ask_price] * df_book[col_bid_size]
wap_denominator = df_book[col_bid_size] + df_book[col_ask_size]
return wap_numerator / wap_denominator
def get_wap_combined(df_book, col_bid_price1, col_ask_price1, col_bid_size1, col_ask_size1,
col_bid_price2, col_ask_price2, col_bid_size2, col_ask_size2):
"""
Returns the Combined Weighted Average Price for both Bid and Ask features.
"""
wap_numerator1 = df_book[col_bid_price1] * df_book[col_ask_size1]
wap_numerator1 += df_book[col_ask_price1] * df_book[col_bid_size1]
wap_numerator2 = df_book[col_bid_price2] * df_book[col_ask_size2]
wap_numerator2 += df_book[col_ask_price2] * df_book[col_bid_size2]
wap_denominator = df_book[col_bid_size1] + df_book[col_ask_size1]
wap_denominator += df_book[col_bid_size2] + df_book[col_ask_size2]
return (wap_numerator1 + wap_numerator2) / wap_denominator
def get_wap_avg(df_book, col_bid_price1, col_ask_price1, col_bid_size1, col_ask_size1,
col_bid_price2, col_ask_price2, col_bid_size2, col_ask_size2):
"""
Returns the Combined Average Weighted Average Price for both Bid and Ask features.
"""
wap_numerator1 = df_book[col_bid_price1] * df_book[col_ask_size1]
wap_numerator1 += df_book[col_ask_price1] * df_book[col_bid_size1]
wap_numerator1 /= df_book[col_bid_size1] + df_book[col_ask_size1]
wap_numerator2 = df_book[col_bid_price2] * df_book[col_ask_size2]
wap_numerator2 += df_book[col_ask_price2] * df_book[col_bid_size2]
wap_numerator2 /= df_book[col_bid_size2] + df_book[col_ask_size2]
return (wap_numerator1 + wap_numerator2) / 2
def get_vol_wap(df_book, col_stock_id, col_time_id, col_wap):
"""
Returns the Volume Weighted Average Price at each time ID.
"""
vol_wap = df_book.groupby([col_stock_id, col_time_id])[col_wap].apply(get_log_return)
vol_wap = vol_wap.fillna(0)
return vol_wap
def get_bid_ask_spread(df_book, col_bid_price1, col_ask_price1, col_bid_price2, col_ask_price2):
"""
Get Combined bid ask spread using both Bid and Ask features.
"""
bas_numerator = df_book[[col_ask_price1, col_ask_price2]].min(axis=1)
bas_denominator = df_book[[col_bid_price1, col_bid_price2]].max(axis=1) - 1
return bas_numerator / bas_denominator
def get_vertical_spread(df_book, col_price1, col_price2):
"""
Returns the vertical spread for Bid/Ask price features inputted.
"""
v_spread = df_book[col_price1] - df_book[col_price2]
return v_spread
def get_spread_feature(df_book, col_price_a, col_price_b):
"""
Returns a spread feature based on the price features inputted.
"""
spread_feat = df_book[col_price_a] - df_book[col_price_b]
return spread_feat
def realized_volatility(series_log_return):
"""
Returns the realized volatility for a given period.
"""
return np.sqrt(np.sum(series_log_return**2))
def rmspe(y_true, y_pred):
"""
Returns the Root Mean Squared Prediction Error.
"""
rmspe = np.sqrt(np.mean(np.square((y_true - y_pred) / y_true)))
return rmspe
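# Hedged alternative (an assumption, not part of the original notebook): if the
# NumPy-based rmspe above causes issues when used as a Keras metric inside a compiled
# graph, a tensor-based equivalent could be passed to model.compile instead.
def rmspe_tf(y_true, y_pred):
    """Root mean squared percentage error built from TensorFlow ops."""
    return tf.sqrt(tf.reduce_mean(tf.square((y_true - y_pred) / y_true)))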
def get_row_id(df, col_stock_id, col_time_id):
"""
Returns row ids in format required for submission.
"""
row_ids = df[col_stock_id].astype("str") + "-" + df[col_time_id].astype("str")
return row_ids
# Compile data manipulation helper functions into complete functions
def extract_trade_feature_set(df_trade):
"""
Returns engineered trade dataset, where each row is a unique stock ID/time ID pair.
"""
print("Calculating trade log returns...")
# Get the Log return for trades by stock ID and time ID
df_trade["trade_log_return"] = get_trade_log_return(df_trade, "stock_id", "time_id", "price")
# Get aggregate statistics for specified numerical features
trade_features = ["price", "size", "order_count", "trade_log_return"]
print("Extracting aggregated trade features...")
time.sleep(1)
for trade_feature in tqdm(trade_features):
# Get min aggregations
df_trade = df_trade.merge(
get_agg_feature(df=df_trade, col_name=trade_feature, func="min"),
how="left",
on=["stock_id", "time_id"]
)
# Get max aggregations
df_trade = df_trade.merge(
get_agg_feature(df=df_trade, col_name=trade_feature, func="max"),
how="left",
on=["stock_id", "time_id"]
)
# Get mean aggregations
df_trade = df_trade.merge(
get_agg_feature(df=df_trade, col_name=trade_feature, func="mean"),
how="left",
on=["stock_id", "time_id"]
)
# Get std aggregations
df_trade = df_trade.merge(
get_agg_feature(df=df_trade, col_name=trade_feature, func="std"),
how="left",
on=["stock_id", "time_id"]
)
# Get sum aggregations
df_trade = df_trade.merge(
get_agg_feature(df=df_trade, col_name=trade_feature, func="sum"),
how="left",
on=["stock_id", "time_id"]
)
print("Finalising trade features...")
# Reduce trade df to just unique stock ID and time ID pairs
df_trade = df_trade.drop(["seconds_in_bucket", "price", "size", "order_count", "trade_log_return"], axis=1)
df_trade = df_trade.drop_duplicates().reset_index(drop=True)
return df_trade
def extract_book_feature_set(df_book):
"""
Returns engineered book dataset, where each row is a unique stock ID/time ID pair.
"""
# WAP for both bid/ask price/size features
df_book["wap1"] = get_wap(df_book, "bid_price1", "ask_price1", "bid_size1", "ask_size1")
df_book["wap2"] = get_wap(df_book, "bid_price2", "ask_price2", "bid_size2", "ask_size2")
# Combined WAP
df_book["wap_combined"] = get_wap_combined(
df_book, "bid_price1", "ask_price1", "bid_size1", "ask_size1",
"bid_price2", "ask_price2", "bid_size2", "ask_size2"
)
# Average WAP for both bid/ask price/size features
df_book["wap_avg"] = get_wap_avg(
df_book, "bid_price1", "ask_price1", "bid_size1", "ask_size1",
"bid_price2", "ask_price2", "bid_size2", "ask_size2"
)
# Get VWAPS based on different WAP features
df_book["vol_wap1"] = get_vol_wap(df_book, "stock_id", "time_id", "wap1")
df_book["vol_wap2"] = get_vol_wap(df_book, "stock_id", "time_id", "wap2")
df_book["vol_wap_combined"] = get_vol_wap(df_book, "stock_id", "time_id", "wap_combined")
df_book["vol_wap_avg"] = get_vol_wap(df_book, "stock_id", "time_id", "wap_avg")
# Get different spread features
df_book["bid_ask_spread"] = get_bid_ask_spread(df_book, "bid_price1", "ask_price1", "bid_price2","ask_price2")
df_book["bid_v_spread"] = get_vertical_spread(df_book, "bid_price1", "bid_price2")
df_book["ask_v_spread"] = get_vertical_spread(df_book, "ask_price1", "ask_price2")
df_book["h_spread1"] = get_spread_feature(df_book, "ask_price1", "bid_price1")
df_book["h_spread2"] = get_spread_feature(df_book, "ask_price2", "bid_price2")
df_book["spread_diff1"] = get_spread_feature(df_book, "ask_price1", "bid_price2")
df_book["spread_diff2"] = get_spread_feature(df_book, "ask_price2", "bid_price1")
print("Extracting aggregated VWAP book features")
time.sleep(1)
# Get aggregated volatility features for each VWAP
vol_features = ["vol_wap1", "vol_wap2", "vol_wap_combined", "vol_wap_avg"]
for vol_feature in tqdm(vol_features):
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=vol_feature, func=realized_volatility),
how="left",
on=["stock_id", "time_id"]
)
print("Extracting aggregated spread book features")
time.sleep(1)
# Get aggregated features for different spread features
spread_features = [
"bid_ask_spread", "bid_v_spread", "ask_v_spread", "h_spread1",
"h_spread2", "spread_diff1", "spread_diff2"
]
for spread_feature in tqdm(spread_features):
# Get min aggregations
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=spread_feature, func="min"),
how="left",
on=["stock_id", "time_id"]
)
# Get max aggregations
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=spread_feature, func="max"),
how="left",
on=["stock_id", "time_id"]
)
# Get mean aggregations
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=spread_feature, func="mean"),
how="left",
on=["stock_id", "time_id"]
)
# Get std aggregations
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=spread_feature, func="std"),
how="left",
on=["stock_id", "time_id"]
)
# Get sum aggregations
df_book = df_book.merge(
get_agg_feature(df=df_book, col_name=spread_feature, func="sum"),
how="left",
on=["stock_id", "time_id"]
)
# Reduce trade df to just unique stock ID and time ID pairs
df_book = df_book.drop([
"seconds_in_bucket", "bid_price1", "ask_price1", "bid_price2",
"ask_price2", "bid_size1", "ask_size1", "bid_size2", "ask_size2",
# WAP features
"wap1", "wap2", "wap_combined", "wap_avg", "vol_wap1",
"vol_wap2", "vol_wap_combined", "vol_wap_avg",
# Spread features
"bid_ask_spread", "bid_v_spread", "ask_v_spread", "h_spread1",
"h_spread2", "spread_diff1", "spread_diff2"
], axis=1)
df_book = df_book.drop_duplicates().reset_index(drop=True)
return df_book
def get_initial_feature_set(df_train, df_trade, df_book):
"""
Returns engineered feature set with labels, before preprocessing
"""
# Extract trade and book features
df_trade = extract_trade_feature_set(df_trade)
df_book = extract_book_feature_set(df_book)
# Merge trade and book features to labels
df_train = pd.merge(df_train, df_trade, how="inner", on=["stock_id", "time_id"])
df_train = pd.merge(df_train, df_book, how="inner", on=["stock_id", "time_id"])
return df_train
###Output
_____no_output_____
###Markdown
Full Data Manipulation Pipeline
###Code
# Define key parameters
baseline_model = True
SEED = 14
np.random.seed(SEED)
SCALER_METHOD = RobustScaler()
FEATURE_SELECTOR = RandomForestRegressor(random_state=SEED)
NUM_FEATURES = 500
PCA_METHOD = PCA(random_state=SEED)
EPOCHS = 100
BATCH_SIZE = 16
KFOLDS = 2
PATIENCE = 10
if baseline_model == True:
MODEL_TO_USE = "nn"
model_name_save = f"{MODEL_TO_USE}_final_classifier_seed_{str(SEED)}_baseline"
else:
MODEL_TO_USE = "nn"
model_name_save = f"{MODEL_TO_USE}_final_classifier_seed_{str(SEED)}"
print(f"Model name: {model_name_save}")
# Define full dataset transformation pipeline
def transform_dataset(X_train, X_val, y_train, y_val,
verbose=0,
scaler=SCALER_METHOD,
feature_selector=FEATURE_SELECTOR,
num_features=NUM_FEATURES,
pca=PCA_METHOD,
seed=SEED
):
"""
Takes in train and validation datasets, and applies feature transformations,
feature selection, scaling and pca (dependent on arguments).
Returns transformed X_train and X_val data ready for training/prediction.
"""
## DATA PREPARATION ##
# Get indices for train and validation dfs - we'll need these later
train_idx = list(X_train.index)
val_idx = list(X_val.index)
# Get train colnames before scaling and feature selection (minus ID features)
feat_cols = X_train.drop(["stock_id", "time_id"], axis=1).columns
# Get subset for ID features
train_id_feats = X_train[["stock_id", "time_id"]]
val_id_feats = X_val[["stock_id", "time_id"]]
## SCALING ##
if scaler != None:
if verbose == 1:
print("APPLYING SCALER...")
# Fit and transform scaler to train and val
scaler.fit(X_train.drop(["stock_id", "time_id"], axis=1))
X_train = scaler.transform(X_train.drop(["stock_id", "time_id"], axis=1))
X_val = scaler.transform(X_val.drop(["stock_id", "time_id"], axis=1))
# Convert scaled array back dataframe
X_train = pd.DataFrame(X_train, index=train_idx, columns=feat_cols)
X_train = pd.merge(train_id_feats, X_train, how="left", left_index=True, right_index=True)
X_val = pd.DataFrame(X_val, index=val_idx, columns=feat_cols)
X_val = pd.merge(val_id_feats, X_val, how="left", left_index=True, right_index=True)
## FEATURE SELECTION ##
# Feature selection is only ran on numerical data
if feature_selector != None:
if verbose == 1:
print("APPLYING FEATURE SELECTOR...")
cols_num = X_train.shape[1]
# Fit tree based classifier to select features
feature_selector_fit = SelectFromModel(estimator=feature_selector)
feature_selector_fit = feature_selector_fit.fit(X_train, y_train)
# Retrieve the names of the features selected for each label
feature_idx = feature_selector_fit.get_support()
selected_features = list(X_train.columns[feature_idx])
# Subset datasets to selected features only
X_train = X_train[selected_features]
X_val = X_val[selected_features]
if verbose == 1:
print(f"{cols_num - X_train.shape[1]} features removed in feature selection.")
## PCA ##
if pca != None:
if verbose == 1:
print("APPLYING PCA...")
# Fit and transform pca to train and val
pca.fit(X_train)
X_train = pca.transform(X_train)
X_val = pca.transform(X_val)
if verbose == 1:
print(f"NUMBER OF PRINCIPAL COMPONENTS: {pca.n_components_}")
# Convert numerical features into pandas dataframe and clean colnames
X_train = pd.DataFrame(X_train, index=train_idx).add_prefix("pca_")
X_val = pd.DataFrame(X_val, index=val_idx).add_prefix("pca_")
if verbose == 1:
print(f"TRAIN SHAPE: \t\t{X_train.shape}")
print(f"VALIDATION SHAPE: \t{X_val.shape}")
return X_train, X_val, selected_features
# If running baseline model, split into training data into train/test split
if baseline_model == True:
if raw_data == True:
# Run feature generation pipeline
train = get_initial_feature_set(train_labels, train_trade, train_book)
del train_labels, train_trade, train_book
X = train.drop("target", axis=1)
y = train[["target"]]
X_tdx, X_vdx, y_tdx, y_vdx = train_test_split(X, y, test_size=0.33, random_state=SEED)
X_tdx, X_vdx, selected_features = transform_dataset(X_tdx, X_vdx, y_tdx, y_vdx, verbose=1)
del X, y
train.to_csv(comp_dir_path/"train_transformed.csv")
###Output
_____no_output_____
###Markdown
Modelling Learning Scheduler
###Code
def build_lrfn(lr_start = 0.00001,
lr_max = 0.0008,
lr_min = 0.00001,
lr_rampup_epochs = 20,
lr_sustain_epochs = 0,
lr_exp_decay = 0.8):
lr_max = lr_max * strategy.num_replicas_in_sync
def lrfn(epoch):
if epoch < lr_rampup_epochs:
lr = (lr_max - lr_start) / lr_rampup_epochs * epoch + lr_start
elif epoch < lr_rampup_epochs + lr_sustain_epochs:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_exp_decay**(epoch - lr_rampup_epochs - lr_sustain_epochs) + lr_min
return lr
return lrfn
lrfn = build_lrfn()
lr = LearningRateScheduler(lrfn, verbose=0)
plt.plot([lrfn(epoch) for epoch in range(EPOCHS)])
plt.title('Learning Rate Schedule')
plt.xlabel('Epochs')
plt.ylabel('Learning Rate')
plt.show()
###Output
_____no_output_____
###Markdown
Define Baseline ModelThe below model was the original architecture, however when we conduct our Bayesian Hyperparameter search, we'll be playing around with the architecture of this baseline model a little. Parameter tuning will affect the model depth as well as the numbers of nodes at each layer, the dropout layers, activation functions and optimisers.
###Code
if baseline_model == True:
def get_model(X_train, y_train):
input_ = Input(shape=(X_train.shape[1], ))
x = Dense(2048, activation='relu')(input_)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
output = Dense(1, activation='linear')(x)
model = Model(input_, output)
return model
if baseline_model == True:
# Create model directory path if does not exist already
if not os.path.exists(f"models/{model_name_save}"):
os.mkdir(f"models/{model_name_save}")
fold = 0
model_name_save_path = f"models/{model_name_save}/{model_name_save}_{str(fold)}.h5"
# Define model
model = get_model(X_tdx, y_tdx)
# Compile model
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[rmspe]
)
# Define learning rate schedule
lr = LearningRateScheduler(lrfn, verbose=0)
# Define early stopping parameters
es = EarlyStopping(
monitor="val_loss",
mode="min",
restore_best_weights=True,
verbose=0,
patience=PATIENCE
)
# Define model checkpoint parameters
mc = ModelCheckpoint(
filepath=model_name_save_path,
save_best_only=True,
save_weights_only=False,
monitor="val_loss",
mode="min",
verbose=0
)
history = model.fit(
X_tdx, y_tdx,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks = [es, lr, mc],
verbose=1,
validation_split=0.25,
use_multiprocessing=True
)
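    # Optional (a hedged addition, not in the original notebook): plot the training
    # curves recorded by model.fit to check for over-fitting before moving on.
    plt.figure(figsize=(10, 5))
    plt.plot(history.history['loss'], label='train loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('Epoch')
    plt.ylabel('Mean squared error')
    plt.legend()
    plt.show()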
###Output
_____no_output_____
###Markdown
Hotel Recognition to Combat Human Trafficking | Train 2021-05-09 Edward Sims 1.00 Import Packages
###Code
# General packages
import pandas as pd
import numpy as np
import os
import gc
import random
from tqdm import tqdm, tqdm_notebook
import cv2
from datetime import datetime as dt
import pickle
import time
import warnings
import multiprocessing
# Data vis packages
import matplotlib.pyplot as plt
%matplotlib inline
# Modelling packages
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras import backend as k
# Key layers
from tensorflow.keras.models import Model, Sequential, load_model
from tensorflow.keras.layers import Input, Add, Dense, Flatten
# Activation layers
from tensorflow.keras.layers import ReLU, LeakyReLU, ELU, ThresholdedReLU
# Dropout layers
from tensorflow.keras.layers import Dropout, AlphaDropout, GaussianDropout
# Normalisation layers
from tensorflow.keras.layers import BatchNormalization
# Embedding layers
from tensorflow.keras.layers import Embedding, Concatenate, Reshape
# Callbacks
from tensorflow.keras.callbacks import Callback, EarlyStopping, LearningRateScheduler, ModelCheckpoint
# Optimisers
from tensorflow.keras.optimizers import SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Nadam, Ftrl
# Model cross validation and evaluation
from collections import Counter, defaultdict
from sklearn.model_selection import KFold, GroupKFold
from tensorflow.keras.losses import sparse_categorical_crossentropy
# For Bayesian hyperparameter searching
from skopt import gbrt_minimize, gp_minimize
from skopt.utils import use_named_args
from skopt.space import Real, Categorical, Integer
# Package options
warnings.filterwarnings("ignore")
pd.set_option("display.max_columns", 50)
plt.rcParams["figure.figsize"] = [14, 8]
# Check GPU config
print(f"Number of GPUs Available: {len(tf.config.experimental.list_physical_devices('GPU'))}")
STRATEGY = tf.distribute.get_strategy()
REPLICAS = STRATEGY.num_replicas_in_sync
AUTO = tf.data.experimental.AUTOTUNE
print(f'REPLICAS: {REPLICAS}')
# Data access
GPU_OPTIONS = tf.compat.v1.GPUOptions(allow_growth=True)
# Get number of cpu cores for multiprocessing
try:
CPUS = 1#int(multiprocessing.cpu_count() / 2)
except NotImplementedError:
CPUS = 1 # Default number of cores
print(f"Number of CPU Cores: {CPUS}")
# Disable eager execution for mAP metric
#tf.compat.v1.disable_eager_execution()
###Output
Number of GPUs Available: 1
REPLICAS: 1
Number of CPU Cores: 1
###Markdown
2.00 Data Preparation 2.01 Read in Data
###Code
# Data paths
data_dir_path = "../input/hotel-id-2021-fgvc8"
train_images_dir_path = os.path.join(data_dir_path, "train_images")
test_images_dir_path = os.path.join(data_dir_path, "test_images")
train_metadata_path = os.path.join(data_dir_path, "train.csv")
sample_sub_path = os.path.join(data_dir_path, "sample_submission.csv")
# Read csv data
train_metadata = pd.read_csv(train_metadata_path, parse_dates=["timestamp"])
sample_sub = pd.read_csv(sample_sub_path)
# Remove 2 duplicated records from metadata
train_metadata_dupes = train_metadata.loc[train_metadata.groupby("image")["image"].transform("count") > 1, ]
train_metadata_dupes_idx = train_metadata_dupes.iloc[[1, 3]].index
train_metadata = train_metadata.drop(train_metadata_dupes_idx, axis=0)
###Output
_____no_output_____
###Markdown
2.02 Set default parameters
###Code
# Define key parameters
SEED = 14
np.random.seed(SEED)
# Default image dimensions
ROWS = 128 # Default row size
COLS = 128 # Default col size
CHANNELS = 3
# Default modelling parameters
EPOCHS = 100
BATCH_SIZE = 64
PATIENCE = 10
KFOLDS = 5
# Uncomment as appropriate
#MODEL_TO_USE = "densenet121"
#MODEL_TO_USE = "densenet169"
#MODEL_TO_USE = "densenet201"
#MODEL_TO_USE = "efficientnet_b0"
#MODEL_TO_USE = "efficientnet_b1"
#MODEL_TO_USE = "efficientnet_b2"
#MODEL_TO_USE = "efficientnet_b3"
#MODEL_TO_USE = "efficientnet_b4"
#MODEL_TO_USE = "efficientnet_b5"
#MODEL_TO_USE = "inception_resnetv2"
#MODEL_TO_USE = "inceptionv3"
#MODEL_TO_USE = "resnet50v2"
#MODEL_TO_USE = "resnet101v2"
#MODEL_TO_USE = "resnext50"
#MODEL_TO_USE = "resnext101"
#MODEL_TO_USE = "resnet152v2"
#MODEL_TO_USE = "vgg19"
MODEL_TO_USE = "xception"
# Initialise dataset for first time or use previously written data
INITIALISE_DATA = True
# Treat model as baseline or not
IS_BASELINE = True
if IS_BASELINE == True:
model_name_save = f"baseline_{MODEL_TO_USE}_{str(ROWS)}x{str(COLS)}_{str(KFOLDS)}folds_seed{str(SEED)}"
elif IS_BASELINE == False:
model_name_save = f"{MODEL_TO_USE}_{str(ROWS)}x{str(COLS)}_{str(KFOLDS)}folds_seed{str(SEED)}"
# Create models path if does not exist already
if not os.path.exists(f"models/{model_name_save}"):
os.mkdir(f"models/{model_name_save}")
print(f"Model name: {model_name_save}")
# Metadata preparation
def get_is_weekend(timestamp_col):
"""
Returns boolean for whether timestamp is a weekend.
"""
timestamp_col_weekday = timestamp_col.dt.weekday
    # Allocate booleans - pandas weekday runs Monday=0 to Sunday=6, so weekends are 5 and 6
timestamp_col_weekday = timestamp_col_weekday.apply(lambda x: False if x < 5 else True)
return timestamp_col_weekday
# Extract year, month and hour from timestamp feature
train_metadata["year"] = train_metadata["timestamp"].dt.year
train_metadata["month"] = train_metadata["timestamp"].dt.month
train_metadata["hour"] = train_metadata["timestamp"].dt.hour
# Extract is_weekend from timestamp
train_metadata["is_weekend"] = get_is_weekend(train_metadata["timestamp"])
train_metadata = train_metadata.drop("timestamp", axis=1)
# Create full image path feature
train_metadata["image_path"] = train_images_dir_path + "/" + train_metadata["chain"].astype("str")
train_metadata["image_path"] = train_metadata["image_path"] + "/" + train_metadata["image"]
# Extract labels from metadata
y_train_vector = np.array(train_metadata["hotel_id"])
# Get all full image paths
train_images_path_vector = np.array(train_metadata["image_path"])
# Following metadata preparation, get number of classes and groups constants
NUM_CLASSES = np.max(y_train_vector) + 1
GROUPS = np.array(train_metadata["chain"])  # group labels for GroupKFold (a second positional argument would be read as a dtype)
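# Hedged sketch (not in the original notebook): how these group labels could feed a
# GroupKFold split so that images from the same hotel chain never leak across folds.
# for fold, (tr_idx, va_idx) in enumerate(
#         GroupKFold(n_splits=KFOLDS).split(train_images_path_vector, y_train_vector, groups=GROUPS)):
#     print(f"Fold {fold}: {len(tr_idx)} train / {len(va_idx)} validation images")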
###Output
_____no_output_____
###Markdown
2.03 Read Images
###Code
def load_image(image_path, augment=False):
"""
Read an image from a file, decode it into a dense tensor, and resize.
"""
try:
image = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [ROWS, COLS])
if augment:
image = tf.image.random_flip_left_right(image)
image = tf.image.random_hue(image, 0.01)
image = tf.image.random_saturation(image, 0.7, 1.3)
image = tf.image.random_contrast(image, 0.8, 1.2)
image = tf.image.random_brightness(image, 0.1)
return image
except:
pass
def load_all_images(images_paths):
"""
Read in multiple images asynchrously using load_image() function.
"""
pool = multiprocessing.Pool(processes=CPUS)
images = pool.map(load_image, images_paths)
pool.close()
pool.join()
return images
# Create TFRecords from data if INITIALISE_DATA is True - otherwise skip this step
if INITIALISE_DATA == True:
# Helper functions to make feature definitions more readable
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
# Load data into a numpy array
category = train_metadata.loc[train_metadata.image == train_images_path_vector[0].split("/")[-1], "chain"]
category = category.item()
# Create TFRecord filewriter
writer = tf.io.TFRecordWriter("../input/tfrecords/test")
for image_path in train_images_path_vector[0:2]:
image = load_image(image_path)
image_name = image_path.split("/")[-1]
label = train_metadata.loc[train_metadata.image == image_name, "hotel_id"].item()
category = train_metadata.loc[train_metadata.image == image_name, "chain"].item()
feature = {
"label": _int64_feature(label),
"image": _bytes_feature(np.array(image).tostring())
}
example = tf.train.Example(features=tf.train.Features(feature=feature))
writer.write(example.SerializeToString())
writer.close()
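# Hedged sketch (not in the original notebook): how a record written above could be
# parsed back. It assumes the image was stored as raw float32 bytes of shape
# (ROWS, COLS, CHANNELS), matching the writer loop.
def parse_tfrecord_example(serialized_example):
    feature_description = {
        "label": tf.io.FixedLenFeature([], tf.int64),
        "image": tf.io.FixedLenFeature([], tf.string),
    }
    parsed = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_raw(parsed["image"], tf.float32)
    image = tf.reshape(image, [ROWS, COLS, CHANNELS])
    return image, parsed["label"]

# Example usage:
# dataset = tf.data.TFRecordDataset("../input/tfrecords/test").map(parse_tfrecord_example)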
###Output
_____no_output_____
###Markdown
2.04 Image Augmentations
###Code
def make_train_img_augmentations(img, y):
"""Make augmentations to single train image and copy label accordingly
Parameters
----------
img : array
Image to augment
y : array
Label array to copy as per number of augmentations
Returns
-------
np.array
np.array of original image and augmented images, and their corresponding labels.
"""
img_augs = np.concatenate(
(
np.expand_dims(img, axis=0),
# Flip left-right
np.expand_dims(np.fliplr(img), axis=0),
# Rotate 90 degrees clockwise
np.expand_dims(np.rot90(img, 1), axis=0),
# Rotate 180 degrees
np.expand_dims(np.rot90(img, 2), axis=0),
# Rotate 270 degrees clockwise
np.expand_dims(np.rot90(img, 3), axis=0)
),
axis=0
)
# Copy labels accordingly
y_augs = img_augs.shape[0]
y = np.repeat(y, y_augs)
return img_augs, y
def make_test_img_augmentations(img):
"""
Returns augmented test images and original for prediction.
"""
img_augs = np.concatenate(
(
np.expand_dims(img, axis=0),
np.expand_dims(np.rot90(img, 1), axis=0),
np.expand_dims(np.rot90(img, 2), axis=0),
np.expand_dims(np.rot90(img, 3), axis=0),
np.expand_dims(np.fliplr(img), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 1)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 2)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 3)), axis=0)),
axis=0
)
return(img_augs)
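# Example usage (hedged sketch, not in the original notebook): expand a single decoded
# image and its label into the five augmented copies produced above.
# sample_img = load_image(train_images_path_vector[0])
# sample_augs, sample_labels = make_train_img_augmentations(np.array(sample_img), y_train_vector[0])
# print(sample_augs.shape, sample_labels.shape)   # -> (5, ROWS, COLS, channels), (5,)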
###Output
_____no_output_____
###Markdown
3.00 Modelling 3.01 Learning Rate
###Code
def build_lrfn(lr_start = 0.00001,
lr_max = 0.0008,
lr_min = 0.00001,
lr_rampup_epochs = 20,
lr_sustain_epochs = 0,
lr_exp_decay = 0.8):
lr_max = lr_max * STRATEGY.num_replicas_in_sync
def lrfn(epoch):
if epoch < lr_rampup_epochs:
lr = (lr_max - lr_start) / lr_rampup_epochs * epoch + lr_start
elif epoch < lr_rampup_epochs + lr_sustain_epochs:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_exp_decay**(epoch - lr_rampup_epochs - lr_sustain_epochs) + lr_min
return lr
return lrfn
lrfn = build_lrfn()
lr = LearningRateScheduler(lrfn, verbose=0)
plt.plot([lrfn(epoch) for epoch in range(EPOCHS)])
plt.title('Learning Rate Schedule')
plt.xlabel('Epochs')
plt.ylabel('Learning Rate')
plt.show()
###Output
_____no_output_____
###Markdown
3.02 Compiler Metrics
###Code
# Define Mean Average Precision at K metric
map_at_k = tf.compat.v1.metrics.average_precision_at_k
#y_true = np.array([[4], [4], [4], [4], [4]]).astype(np.int64)
#y_true = tf.identity(y_true)
#
#y_pred = np.array([[0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6],
# [0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6],
# [0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6],
# [0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6],
# [0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6]
# ]).astype(np.float32)
#y_pred = tf.identity(y_pred)
#
#_, m_ap = map_at_k(y_true, y_pred, 5)
#
#sess = tf.Session()
#sess.run(tf.local_variables_initializer())
#
#stream_vars = [i for i in tf.local_variables()]
#
#tf_map = sess.run(m_ap)
#print(tf_map)
#
#tmp_rank = tf.nn.top_k(y_pred, 5)
#
#print(sess.run(tmp_rank))
###Output
_____no_output_____
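###Markdown
As a quick sanity check for the metric, a small NumPy sketch of mean average precision at K for single-label targets (independent of the TF1 metric above). The array names are illustrative: y_true holds integer class ids and y_pred holds one score per class.
###Code
def map_at_k_numpy(y_true, y_pred, k=5):
    # Rank classes by descending score and keep the top k
    top_k = np.argsort(-np.asarray(y_pred), axis=1)[:, :k]
    scores = []
    for true_label, ranking in zip(y_true, top_k):
        hits = np.where(ranking == true_label)[0]
        # 1 / rank of the true class if it appears in the top k, else 0
        scores.append(1.0 / (hits[0] + 1) if hits.size else 0.0)
    return float(np.mean(scores))
# Reproduces the commented example above: class 4 has the highest score, so MAP@5 = 1.0
# map_at_k_numpy(np.array([4]), np.array([[0.1, 0.3, 0.5, 0.7, 0.9, 0.1, 0.1, 0.2, 0.6]]))
###Output
_____no_output_____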
###Markdown
3.03 CNN Models
###Code
# The model we'll feed the images into before concatenation
def get_cnn_model(model_to_use=MODEL_TO_USE):
"""Get the pretrained CNN model specified.
Parameters
----------
kfold : int
Fold that the CV is currently on (to determine img size)
model_to_use : str
Model to retrieve
Returns
-------
model_return : tensorflow.python.keras.engine.functional.Functional
A pretrained CNN model without top included.
"""
input_shape = (ROWS, COLS, CHANNELS)
# DenseNet121
if model_to_use == "densenet121":
from tensorflow.keras.applications import DenseNet121
return DenseNet121(input_shape=input_shape, include_top=False)
# DenseNet169
elif model_to_use == "densenet169":
from tensorflow.keras.applications import DenseNet169
return DenseNet169(input_shape=input_shape, include_top=False)
# DenseNet201
elif model_to_use == "densenet201":
from tensorflow.keras.applications import DenseNet201
return DenseNet201(input_shape=input_shape, include_top=False)
# EfficientNet_B0
elif model_to_use == "efficientnet_b0":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB0(
input_shape=input_shape, include_top=False
)
# EfficientNet_B1
elif model_to_use == "efficientnet_b1":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB1(
input_shape=input_shape, include_top=False
)
# EfficientNet_B2
elif model_to_use == "efficientnet_b2":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB2(
input_shape=input_shape, include_top=False
)
# EfficientNet_B3
elif model_to_use == "efficientnet_b3":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB3(
input_shape=input_shape, include_top=False
)
# EfficientNet_B4
elif model_to_use == "efficientnet_b4":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB4(
input_shape=input_shape, include_top=False
)
# EfficientNet_B5
elif model_to_use == "efficientnet_b5":
import efficientnet.tfkeras as efficientnet
return efficientnet.EfficientNetB5(
input_shape=input_shape, include_top=False
)
# InceptionResNetV2
elif model_to_use == "inception_resnetv2":
from tensorflow.keras.applications import InceptionResNetV2
return InceptionResNetV2(input_shape=input_shape, include_top=False)
# InceptionV3
elif model_to_use == "inceptionv3":
from tensorflow.keras.applications import InceptionV3
return InceptionV3(input_shape=input_shape, include_top=False)
# NasNetLarge
elif model_to_use == "nasnetlarge":
from tensorflow.keras.applications import NASNetLarge
return NASNetLarge(input_shape=input_shape, include_top=False)
# ResNet50V2
elif model_to_use == "resnet50v2":
from tensorflow.keras.applications import ResNet50V2
return ResNet50V2(input_shape=input_shape, include_top=False)
# ResNet101V2
elif model_to_use == "resnet101v2":
from tensorflow.keras.applications import ResNet101V2
return ResNet101V2(input_shape=input_shape, include_top=False)
# ResNet152V2
elif model_to_use == "resnet152v2":
from tensorflow.keras.applications import ResNet152V2
return ResNet152V2(input_shape=input_shape, include_top=False)
    # ResNeXt50
    elif model_to_use == "resnext50":
        from keras_applications.resnext import ResNeXt50
        # `classes` is only used when include_top=True, so it is omitted here
        return ResNeXt50(
            input_shape=input_shape,
            include_top=False,
            backend=keras.backend,
            layers=keras.layers,
            models=keras.models,
            utils=keras.utils
        )
    # ResNeXt101
    elif model_to_use == "resnext101":
        from keras_applications.resnext import ResNeXt101
        return ResNeXt101(
            input_shape=input_shape,
            include_top=False,
            backend=keras.backend,
            layers=keras.layers,
            models=keras.models,
            utils=keras.utils
        )
# VGG19
elif model_to_use == "vgg19":
from tensorflow.keras.applications import VGG19
return VGG19(input_shape=input_shape, include_top=False)
    # Xception
    elif model_to_use == "xception":
        from tensorflow.keras.applications import Xception
        return Xception(input_shape=input_shape, include_top=False)
    else:
        raise ValueError(f"Unknown model_to_use: {model_to_use}")
###Output
_____no_output_____
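###Markdown
A quick smoke test of the selector above (building DenseNet121 here is just an illustration; any of the supported names works, and ImageNet weights are downloaded by default):
###Code
# Build one of the supported backbones and inspect its feature-map output shape
backbone = get_cnn_model("densenet121")
print(backbone.output_shape)
###Output
_____no_output_____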
###Markdown
3.04 Create TF Record Dataset
###Code
def get_dataset(files, augment = False, shuffle = False, repeat = False,
labeled=True, return_image_names=True, batch_size=16, dim=256):
ds = tf.data.TFRecordDataset(files, num_parallel_reads=AUTO)
ds = ds.cache()
if repeat:
ds = ds.repeat()
if shuffle:
ds = ds.shuffle(1024*8)
opt = tf.data.Options()
opt.experimental_deterministic = False
ds = ds.with_options(opt)
if labeled:
ds = ds.map(read_labeled_tfrecord, num_parallel_calls=AUTO)
else:
ds = ds.map(lambda example: read_unlabeled_tfrecord(example, return_image_names),
num_parallel_calls=AUTO)
ds = ds.map(lambda img, imgname_or_label: (prepare_image(img, augment=augment, dim=dim),
imgname_or_label),
num_parallel_calls=AUTO)
ds = ds.batch(batch_size * REPLICAS)
ds = ds.prefetch(AUTO)
return ds
class DataGenerator(keras.utils.Sequence):
"""
Generates data for Keras
"""
def __init__(self, list_IDs, labels, batch_size, dim, n_channels, n_classes, shuffle):
"Initialization"
self.dim = dim
self.batch_size = batch_size
self.labels = labels
self.list_IDs = list_IDs
self.n_channels = n_channels
self.n_classes = n_classes
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
"""
Denotes the number of batches per epoch
"""
return int(np.floor(len(self.list_IDs) / self.batch_size))
def __getitem__(self, index):
"""
Generate one batch of data
"""
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size: (index + 1)*self.batch_size]
# Find list of IDs
list_IDs_temp = [self.list_IDs[k] for k in indexes]
# Generate data
X, y = self.__data_generation(list_IDs_temp)
return X, y
def on_epoch_end(self):
"""
Updates indexes after each epoch
"""
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, list_IDs_temp):
"""
Generates data containing batch_size samples
"""
# Initialization
X = np.empty((self.batch_size, *self.dim, self.n_channels))
y = np.empty((self.batch_size), dtype=int)
# Generate data
for i, ID in enumerate(list_IDs_temp):
# Store sample
X[i,] = load_image(train_images_path_list[int(ID)])
# Store class
y[i] = self.labels[ID]
        # One-hot encode with an explicit class count so the label width is always n_classes
        # (appending a dummy label here would make y one sample longer than X)
        return X, keras.utils.to_categorical(y, num_classes=self.n_classes)
###Output
_____no_output_____
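###Markdown
A hypothetical smoke test for the generator: the string IDs index into train_images_path_list and y_train_vector, exactly as in the training loop below, so this assumes those globals (and ROWS, COLS, CHANNELS, BATCH_SIZE, NUM_CLASSES) are already defined.
###Code
# Build one batch and confirm shapes line up with (batch, ROWS, COLS, CHANNELS) and (batch, NUM_CLASSES)
_ids = [str(i) for i in range(BATCH_SIZE * 2)]
_labels = {i_: int(y_train_vector[int(i_)]) for i_ in _ids}
_gen = DataGenerator(_ids, _labels, batch_size=BATCH_SIZE, dim=(ROWS, COLS),
                     n_channels=CHANNELS, n_classes=NUM_CLASSES, shuffle=False)
_X_batch, _y_batch = _gen[0]
print(_X_batch.shape, _y_batch.shape)
###Output
_____no_output_____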
###Markdown
3.05 Define and Train Baseline Model
###Code
def get_baseline_model(model_cnn=MODEL_TO_USE, verbose=1):
model_cnn = get_cnn_model(model_cnn)
# Add a global spatial average pooling layer
x = model_cnn.output
x = keras.layers.GlobalAveragePooling2D()(x)
# Define output layer
output = Dense(NUM_CLASSES, activation="softmax")(x)
# Define final model
model = Model(inputs=model_cnn.input, outputs=output)
return model
if IS_BASELINE == True:
# Define CV strategy
gkf = GroupKFold(n_splits=KFOLDS)
loss_scores = []
for fold, (tdx, vdx) in enumerate(gkf.split(train_images_path_list[0:1000], y_train_vector[0:1000], groups=GROUPS[0:1000])):
print(f"FOLD {fold}")
print("--------------------------------------------------------------------------------------------")
# Create name to save model by
model_save_path = f"models/{model_name_save}/{model_name_save}_{str(fold)}.h5"
print("\nGathering data...")
# Shuffle tdx and vdx
np.random.shuffle(tdx)
np.random.shuffle(vdx)
# Set parameter dictionary
params = {
"dim": (ROWS, COLS),
"batch_size": BATCH_SIZE,
"n_classes": NUM_CLASSES,
"n_channels": CHANNELS,
"shuffle": True
}
# Set dictionaries for generator
partition = {"train": tdx.astype('str'), "validation": vdx.astype('str')}
labels = dict(
zip(
np.concatenate((tdx, vdx)).astype('str'),
list(y_train_vector[np.concatenate((tdx, vdx))])
)
)
# Define data generators
training_generator = DataGenerator(partition["train"], labels, **params)
validation_generator = DataGenerator(partition["validation"], labels, **params)
# Get baseline model
print("Loading model...")
model = get_baseline_model()
# Compile model
model.compile(optimizer="adam", loss="categorical_crossentropy")
# Define learning rate schedule
lr = LearningRateScheduler(lrfn, verbose=0)
# Define early stopping parameters
es = EarlyStopping(
monitor="val_loss",
mode="min",
restore_best_weights=True,
verbose=0,
patience=PATIENCE
)
# Define model checkpoint parameters
mc = ModelCheckpoint(
filepath=model_save_path,
save_best_only=True,
save_weights_only=False,
monitor="val_loss",
mode="min",
verbose=0
)
# Fit model
print("Training model...")
history = model.fit(
x=training_generator,
validation_data=validation_generator,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks = [es, lr, mc],
use_multiprocessing=True,
workers=CPUS,
verbose=1
)
# Get val_loss for the best model (one saved with ModelCheckpoint)
loss = min(history.history["val_loss"])
print(f"LOSS: \t\t{loss}")
#print('MAKING VALIDATION PREDICTIONS...')
# Load best model
#model = load_model(model_save_name)
# Make validation predictions
#preds = model.predict(X_vdx_best_model)
#
## Calculate OOF loss
#oof_loss = metric(np.array(y_vdx_best_model), np.array(preds))
#print('FOLD ' + str(fold) + ' LOSS: ' + str(oof_loss))
#print('--------------------------------------------------------------------------------------------')
#time.sleep(2)
#loss_scores.append(oof_loss)
## Clean up
k.clear_session()
#gc.collect()
#os.remove(model_save_name_temp)
###Output
_____no_output_____
###Markdown
3.06 Bayesian Hyperparameter Search
###Code
# Define hyperparameter search dimensions
dim_learning_rate = Real(low=1e-4, high=1e-2, prior='log-uniform', name='learning_rate')
dim_num_dense_layers = Integer(low=1, high=6, name='num_dense_layers')
dim_num_input_nodes = Integer(low=1, high=4096, name='num_input_nodes')
dim_num_dense_nodes = Integer(low=1, high=4096, name='num_dense_nodes')
dim_activation = Categorical(categories=['relu','leaky_relu','elu','threshold_relu'], name='activation')
dim_batch_size = Integer(low=1, high=64, name='batch_size')
dim_patience = Integer(low=3, high=15, name='patience')
dim_optimiser = Categorical(
categories=['sgd','adam','rms_prop','ada_delta','ada_grad', 'ada_max','n_adam','ftrl'], name='optimiser'
)
dim_optimiser_decay = Real(low=1e-6, high=1e-2, name='optimiser_decay')
dim_dropout_layer = Categorical(categories=['dropout','gaussian_dropout','alpha_dropout'],name='dropout_layer')
dim_dropout_val = Real(low=0.1, high=0.8, name='dropout_val')
dimensions = [
dim_learning_rate,
dim_num_dense_layers,
dim_num_input_nodes,
dim_num_dense_nodes,
dim_activation,
dim_batch_size,
dim_patience,
dim_optimiser,
dim_optimiser_decay,
dim_dropout_layer,
dim_dropout_val,
]
# Set default hyperparameters
default_parameters = [
1e-3, # learning_rate
1, # num_dense_layers
512, # num_input_nodes
16, # num_dense_nodes
'relu', # activation
64, # batch_size
3, # patience
'adam', # optimiser
1e-3, # optimiser_decay
'dropout', # dropout_layer
0.1, # dropout_val
]
###Output
_____no_output_____
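###Markdown
A quick way to sanity-check the search space before running the optimisation is to draw a few random points from it (this assumes the skopt space classes used for the dimension definitions above are already imported).
###Code
from skopt.space import Space
# Each point is one full hyperparameter configuration, in the order of `dimensions`
for point in Space(dimensions).rvs(n_samples=3, random_state=SEED):
    print(point)
###Output
_____no_output_____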
###Markdown
3.07 Train Model with Bayesian Hyperparameter Search
###Code
# Define CV strategy
kf = KFold(n_splits=KFOLDS, shuffle=True, random_state=SEED)
loss_scores = []
best_params = pd.DataFrame(
columns=['kfold','selected_features','num_features', 'num_components', 'use_embedding', 'seed'])
for fold, (tdx, vdx) in enumerate(kf.split(X, y)):
print(f'FOLD {fold}')
print('--------------------------------------------------------------------------------------------------')
# Create name to save model by
model_save_name = 'models/' + model_name_save + '/' + model_name_save + '_' + str(fold) + '.h5'
model_save_name_temp = 'models/' + model_name_save + '/' + 'TEMP_'+ model_name_save+ '_' + str(fold) + '.h5'
@use_named_args(dimensions=dimensions)
def get_hyperopts(learning_rate,
num_dense_layers,
num_input_nodes,
num_dense_nodes,
activation,
batch_size,
patience,
optimiser,
optimiser_decay,
dropout_layer,
dropout_val):
# Define key parameters - these are affected by parameter search so must be done inside function
BATCH_SIZE = batch_size
PATIENCE = patience
# Fetch in-fold data
X_tdx, X_vdx, y_tdx, y_vdx = X.iloc[tdx, :], X.iloc[vdx, :], y.iloc[tdx, :], y.iloc[vdx, :]
# Define activation layers
if activation == 'relu':
ACTIVATION = ReLU()
elif activation == 'leaky_relu':
ACTIVATION = LeakyReLU()
elif activation == 'elu':
ACTIVATION = ELU()
elif activation == 'threshold_relu':
ACTIVATION = ThresholdedReLU()
# Define regularisation layers
if dropout_layer == 'dropout':
REG_LAYER = Dropout(dropout_val)
elif dropout_layer == 'gaussian_dropout':
REG_LAYER = GaussianDropout(dropout_val)
elif dropout_layer == 'alpha_dropout':
REG_LAYER = AlphaDropout(dropout_val)
# Define optimisers #
if optimiser == 'sgd':
OPTIMISER = SGD(lr=learning_rate, decay=optimiser_decay)
        elif optimiser == 'adam':
            OPTIMISER = Adam(lr=learning_rate, decay=optimiser_decay)
        elif optimiser == 'rms_prop':
            OPTIMISER = RMSprop(lr=learning_rate, decay=optimiser_decay)
elif optimiser == 'ada_delta':
OPTIMISER = Adadelta(lr=learning_rate, decay=optimiser_decay)
elif optimiser == 'ada_grad':
OPTIMISER = Adagrad(lr=learning_rate, decay=optimiser_decay)
elif optimiser == 'ada_max':
OPTIMISER = Adamax(lr=learning_rate, decay=optimiser_decay)
elif optimiser == 'n_adam':
OPTIMISER = Nadam(lr=learning_rate, decay=optimiser_decay)
elif optimiser == 'ftrl':
OPTIMISER = Ftrl(lr=learning_rate, decay=optimiser_decay)
## BUILD MODEL BASED ON INPUTTED BAYESIAN HYPERPARAMETERS ##
# Input layer #
if USE_EMBEDDING == 1:
inputs = []
embeddings = []
for col in cat_cols:
# Create categorical embedding for each categorical feature
input_ = Input(shape=(1,))
input_dim = int(X_tdx[col].max() + 1)
embedding = Embedding(input_dim=input_dim, output_dim=10, input_length=1)(input_)
embedding = Reshape(target_shape=(10,))(embedding)
inputs.append(input_)
embeddings.append(embedding)
input_numeric = Input(shape=(len(num_cols),))
embedding_numeric = Dense(num_input_nodes)(input_numeric)
embedding_numeric = ACTIVATION(embedding_numeric)
inputs.append(input_numeric)
embeddings.append(embedding_numeric)
x = Concatenate()(embeddings)
if USE_EMBEDDING == 0:
input_ = Input(shape=(X_tdx.shape[1], ))
x = Dense(num_input_nodes)(input_)
# Hidden layers #
for i in range(num_dense_layers):
layer_name = f'layer_dense_{i+1}'
x = Dense(num_dense_nodes, name=layer_name)(x)
x = ACTIVATION(x)
x = BatchNormalization()(x)
x = REG_LAYER(x)
# Output layer #
output = Dense(y.shape[1], activation='softmax')(x)
if USE_EMBEDDING == 1:
model = Model(inputs, output)
elif USE_EMBEDDING == 0:
model = Model(input_, output)
# COMPILE MODEL #
model.compile(optimizer=OPTIMISER,
loss='binary_crossentropy')
# Define learning rate schedule
lr = LearningRateScheduler(lrfn, verbose=0)
# Define early stopping parameters
es = EarlyStopping(monitor='val_loss',
mode='min',
restore_best_weights=True,
verbose=0,
patience=PATIENCE)
# Define model checkpoint parameters
mc = ModelCheckpoint(filepath=model_save_name_temp,
save_best_only=True,
save_weights_only=False,
monitor='val_loss',
mode='min',
verbose=0)
if USE_EMBEDDING == 1:
# Separate data to fit into embedding and numerical input layers
X_tdx = [np.absolute(X_tdx[i]) for i in cat_cols] + [X_tdx[num_cols]]
X_vdx = [np.absolute(X_vdx[i]) for i in cat_cols] + [X_vdx[num_cols]]
# FIT MODEL #
print('TRAINING...')
history = model.fit(X_tdx, y_tdx,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks = [es, lr, mc],
verbose=0,
validation_split=0.25
)
# Get val_loss for the best model (one saved with ModelCheckpoint)
loss = min(history.history['val_loss'])
print(f'CURRENT LOSS: \t\t{loss}')
# Save best loss and parameters to global memory
global best_loss
global best_params
# If the classification loss of the saved model is improved
if loss < best_loss:
model.save(model_save_name)
best_loss = loss
# Save transformed validation arrays (so they can be used for prediction)
global X_vdx_best_model, y_vdx_best_model
X_vdx_best_model, y_vdx_best_model = X_vdx, y_vdx
### SAVE MODEL PARAMETERS ###
best_params = best_params.loc[best_params.kfold != fold]
best_params = best_params.append({'kfold' : fold,
'selected_features': selected_features,
'num_features' : NUM_FEATURES,
'num_components' : NUM_COMPONENTS,
'use_embedding' : USE_EMBEDDING,
'seed' : SEED},
ignore_index=True)
best_params.to_csv('final_classifier_parameters/' + model_name_save + '.csv', index=False)
print(f'BEST LOSS: \t\t{best_loss}\n')
del model
k.clear_session()
return(loss)
## RUN BAYESIAN HYPERPARAMETER SEARCH ##
print('RUNNING PARAMETER SEARCH...\n')
time.sleep(2)
best_loss = np.Inf
search_iteration = 1
gp_result = gp_minimize(func = get_hyperopts,
dimensions = dimensions,
acq_func = 'EI', # Expected Improvement.
n_calls = 50,
noise = 0.01,
n_jobs = -1,
kappa = 5,
x0 = default_parameters,
random_state = SEED
)
print('\nSEARCH COMPLETE.')
print('MAKING VALIDATION PREDICTIONS...')
# Load best model
model = load_model(model_save_name)
# Make validation predictions
preds = model.predict(X_vdx_best_model)
# Calculate OOF loss
oof_loss = metric(np.array(y_vdx_best_model), np.array(preds))
print('FOLD ' + str(fold) + ' LOSS: ' + str(oof_loss))
print('--------------------------------------------------------------------------------------------------')
time.sleep(2)
loss_scores.append(oof_loss)
# Clean up
gc.collect()
os.remove(model_save_name_temp)
###Output
_____no_output_____
###Markdown
SIIM-ISIC - Train
In this notebook we focus on:
1. Reading in the data
- Data manipulation and preparation
- Image augmentation
- Model architecture creation
- Cross Validation strategy creation
- Model training
- Test-time augmentations (TTAs)
- Submission creation

Ideas I wasn't able to explore:
- External data
- Focal loss
- TFRecords
- Having different img size inputs within the CV strategy (e.g. fold 0 = 128x128, fold 1 = 256x256)
- Pixel normalisation / centering. For some reason this was computationally too expensive, so maybe I didn't do it properly...

Things I learned:
- Set up the entire pipeline simply from the start, all the way from data reading to submission - a baseline procedure in place at the beginning makes it so much easier to plug and play.
- It is so important to track and record all your experiments, and in as much detail as possible!
- Having a development-environment-style notebook, where parameters are easy to change without breaking the code, pays off.
- Test-time augmentations are critical in both OOF training validation AND test predictions. My impression so far is that any TTAs used to generate submission predictions should be the same as those used to make OOF predictions (although I would appreciate being corrected if this is not the case).
- Looping through images and augmenting them individually is more time-efficient than augmenting an entire batch at once.
- Learning rate schedules are so important - they need to fit your specific model's patterns and nuances. There is no one-size-fits-all schedule.
- Definitely consider model checkpoints, and early stopping continues to be effective (particularly on a second validation set).
- Doing slightly different things in each fold (changing augmentations slightly, for example) can be effective.
- Starting training iterations on small image sizes (128x128) while you work out the baseline architecture, before increasing to larger images with a more established architecture, can save a lot of time. But be careful about memory capacity!
- If possible, train the same model architectures under different seeds and compare CV differences as a way to avoid overfitting. These can even be ensembled at the end.
- Batch normalisation and image normalisation are completely different, and some people don't realise this!
- The secret to success is a robust and effective CV strategy.

1.00 Load Packages
###Code
# General packages
import pandas as pd
import numpy as np
import re
import os
import gc
import json
import math
import random
from tqdm import tqdm, tqdm_notebook
#tqdm_notebook().pandas()
import datetime
import time
import warnings
warnings.filterwarnings('ignore')
from collections import Counter, defaultdict
# Data vis packages
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Data prep
import pydicom as dicom # to handle dicom files
import cv2
import imgaug.augmenters as iaa
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn import metrics
# Modelling packages
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras import backend as k
from tensorflow.keras.layers import Input, Add, Dense, BatchNormalization, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, GlobalAveragePooling2D, concatenate
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import Callback, EarlyStopping, LearningRateScheduler, ModelCheckpoint
from sklearn.model_selection import GroupKFold
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
strategy = tf.distribute.get_strategy()
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
# GPU options - allow GPU memory to grow on demand rather than pre-allocating it all
gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
###Output
Num GPUs Available: 1
REPLICAS: 1
###Markdown
2.00 Read Data
###Code
# Define paths - mel stands for melanoma
input_path = '../input'
mel_dir_path = os.path.join(input_path, 'siim-isic-melanoma-classification')
train_metadata_path = os.path.join(mel_dir_path, 'train.csv')
test_metadata_path = os.path.join(mel_dir_path, 'test.csv')
sample_sub_path = os.path.join(mel_dir_path, 'sample_submission.csv')
train_img_path = os.path.join(mel_dir_path, 'train')
test_img_path = '512x512_jpgs/test'
# Read train metadata
train_metadata = pd.read_csv(train_metadata_path)
# Read sample submission
sample_sub = pd.read_csv(sample_sub_path)
preprocessed_images_path = 'preprocessed_images/'
# Remove duplicates
duplicates = pd.read_csv('2020_Challenge_duplicates.csv')
train_metadata = train_metadata[(~train_metadata['image_name'].isin(duplicates['ISIC_id_paired']))]
# Some definitions going forward
ROWS = 512 # Default row size
COLS = 512 # Default col size
CHANNELS = 3
EPOCHS = 8
BATCH_SIZE = 4
CLASSES = 2
# Read all images in and subset in CV? Or Read images inside each fold in CV?
read_images_in_fold = True
# Uncomment as appropriate
#MODEL_TO_USE = 'densenet201'
#MODEL_TO_USE = 'inception_resnetv2'
#MODEL_TO_USE = 'xception'
#MODEL_TO_USE = 'inceptionv3'
#MODEL_TO_USE = 'vgg19'
MODEL_TO_USE = 'efficientnet_b5'
####MODEL_TO_USE = 'resnext101'
#MODEL_TO_USE = 'resnet152v2'
####MODEL_TO_USE = 'efficientnet_b0'
####MODEL_TO_USE = 'efficientnet_b1'
####MODEL_TO_USE = 'efficientnet_b2'
####MODEL_TO_USE = 'efficientnet_b3'
####MODEL_TO_USE = 'efficientnet_b4'
####MODEL_TO_USE = 'densenet169'
####MODEL_TO_USE = 'densenet121'
####MODEL_TO_USE = 'resnet50v2'
####MODEL_TO_USE = 'resnet101v2'
####MODEL_TO_USE = 'resnext50'
# Parameters for each fold
# standard_models = [128, 256, 384, 512]
# efficient_nets = [224, 240, 260, 300, 380, 456]
kfold_params = {
0: {'ROWS':ROWS,'COLS':COLS,'AUG':'fliplr'},
1: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot90' },
2: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot180'},
3: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot270'},
4: {'ROWS':ROWS,'COLS':COLS,'AUG':'fliplr'},
5: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot90' },
6: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot180'},
7: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot270'}
}
KFOLDS = len(kfold_params)
SEED = 14
np.random.seed(SEED)
model_name_save = MODEL_TO_USE + '_' + str(ROWS) + 'x' + str(COLS) + '_seed' + str(SEED)
# Create weights path if does not exist already
if not os.path.exists(f'weights/{model_name_save}'):
os.mkdir(f'weights/{model_name_save}')
print(f'Model name: {model_name_save}')
y_train = train_metadata['target']
def read_jpgs(filenames, rows, cols, loading_bar=True):
    # Read images in, optionally showing a tqdm progress bar
    image_list = []
    iterator = tqdm(filenames) if loading_bar == True else filenames
    for image_name in iterator:
        image_path = os.path.join(preprocessed_images_path, image_name) + '.jpg'
        image = cv2.imread(image_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = cv2.resize(image, (rows, cols))
        image_list.append(image)
    return(image_list)
def prepare_images(use_raw_images=False):
    import shutil  # needed for shutil.rmtree in the cleanup below
    if use_raw_images == True:
for del_filename in os.listdir(preprocessed_images_path):
del_file_path = os.path.join(preprocessed_images_path, del_filename)
try:
if os.path.isfile(del_file_path) or os.path.islink(del_file_path):
os.unlink(del_file_path)
elif os.path.isdir(del_file_path):
shutil.rmtree(del_file_path)
except Exception as e:
print('Failed to delete %s. Reason: %s' % (del_file_path, e))
# Read images in
image_list = []
filenames = train_metadata['image_name']
for image_name in tqdm(filenames):
image_path = os.path.join(train_img_path, image_name) + '.dcm'
# Read the dcm image in
image = dicom.dcmread(image_path).pixel_array
res = cv2.resize(image,(ROWS,COLS))
image_list.append(res)
# Save processed image
new_filename = preprocessed_images_path + image_name + '.jpg'
cv2.imwrite(new_filename, res)
    elif use_raw_images == False:
        image_list = read_jpgs(filenames=train_metadata['image_name'], rows=ROWS, cols=COLS)
return image_list
if read_images_in_fold == False:
X_train_img = np.array(prepare_images())
print(f'X_train_img shape: {X_train_img.shape}')
y_train = np.array(y_train)
###Output
_____no_output_____
###Markdown
3.00 Data Preprocessing 3.01 Train Metadata
###Code
# Remove diagnosis as too many 'unknown' values
# Remove benign_malignant as the same as target variable
train_df = train_metadata.drop(['diagnosis','benign_malignant'], axis=1)
# Replace whitespace in anatom_site_general_challenge with underscore
train_df['anatom_site_general_challenge'] = train_df[
'anatom_site_general_challenge'].replace(' ', '_', regex=True)
# Encode sex feature
train_df = train_df.merge(pd.get_dummies(train_df[
['sex','anatom_site_general_challenge']]), left_index=True, right_index=True)
train_df['age_approx'] = train_df['age_approx'].fillna(0)
train_df.drop(['sex', 'anatom_site_general_challenge'], axis=1, inplace=True)
train_df.head()
X_train_df = np.asarray(train_df.drop(['patient_id', 'target'], axis=1))
y_train = np.asarray(train_df['target'])
groups = list(train_df['patient_id'])
del [train_metadata, train_df, duplicates]
###Output
_____no_output_____
###Markdown
3.02 Train Images
Standardise images by subtracting the per-channel mean for the training dataset and dividing by the per-channel standard deviation for the whole training dataset.
###Code
def preprocess_imgs(train_imgs, test_imgs):
"""
Centers images by minusing the mean and dividing by std
*train_imgs: (array) train images to read in and normalise
*test_imgs: (array) test images to read in and normalise
"""
print('Preprocessing images...\n')
# Convert pixel values to float
train_imgs = train_imgs.astype(float)
test_imgs = test_imgs.astype(float)
# Get per-channel means and stds
train_means = train_imgs.reshape(-1, train_imgs.shape[-1]).mean(axis=0)
train_stds = train_imgs.reshape(-1, train_imgs.shape[-1]).std(axis=0)
# Standardise images
train_imgs -= train_means
train_imgs /= train_stds
test_imgs -= train_means
test_imgs /= train_stds
#print(f'Train per-channel means: {train_imgs.reshape(-1, train_imgs.shape[-1]).mean(axis=0)}')
    #print(f'Train per-channel stds: {train_imgs.reshape(-1, train_imgs.shape[-1]).std(axis=0)}')
return(train_imgs, test_imgs)
###Output
_____no_output_____
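###Markdown
A quick check of the standardisation using small random arrays as stand-ins for the real image batches: after preprocessing, the train set should have roughly zero mean and unit standard deviation per channel.
###Code
_train_demo = np.random.randint(0, 256, size=(8, 32, 32, 3))
_test_demo = np.random.randint(0, 256, size=(4, 32, 32, 3))
_train_std, _test_std = preprocess_imgs(_train_demo, _test_demo)
print(_train_std.reshape(-1, 3).mean(axis=0))
print(_train_std.reshape(-1, 3).std(axis=0))
###Output
_____no_output_____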
###Markdown
4.00 Train Data Augmentation
###Code
# Create augmentation pipelines
def make_train_augmentations(X_img, X_met, y, p, aug):
"""
Make a random subset of p proportion. Apply augmentations
to the subset and append back to the original dataset,
making necessary changes to labels.
*X_img: (array) Train images to read in and augment
*X_met: (array) Train metadata to copy as per augmentated images
*y: (array) Train labels to copy as per augmented images
*p: (float) sample size probability
*aug: (string) ['fliplr', 'rot90', 'rot180', 'rot270']
"""
print('Augmenting images...')
# Get a sample of X and y based on p proportion
sample_size = int(round(len(y) * p))
idx_sample = random.sample(range(0, len(y), 1), sample_size)
# Make augmentations to sample
if aug == 'fliplr':
X_img = np.concatenate((X_img,
np.array([np.fliplr(X_img[i]) for i in idx_sample])),
axis=0)
elif aug == 'rot90':
X_img = np.concatenate((X_img,
np.array([np.rot90(X_img[i], 1) for i in idx_sample])),
axis=0)
elif aug == 'rot180':
X_img = np.concatenate((X_img,
np.array([np.rot90(X_img[i], 2) for i in idx_sample])),
axis=0)
elif aug == 'rot270':
X_img = np.concatenate((X_img,
np.array([np.rot90(X_img[i], 3) for i in idx_sample])),
axis=0)
# Copy metadata accordingly
X_met_sample = np.array([X_met[i] for i in idx_sample])
X_met = np.concatenate((X_met, X_met_sample), axis=0)
del X_met_sample
# Copy labels accordingly
y_sample = np.array([y[i] for i in idx_sample])
y = np.concatenate((y, y_sample), axis=0)
del y_sample
#X_img, X_met, y = shuffle(X_img, X_met, y, random_state=SEED)
return(X_img, X_met, y)
if read_images_in_fold == False:
print(f'Train imgs shape: {X_train_img.shape}')
print(f'Train dataframe shape: {X_train_df.shape}')
print(f'Train targets shape: {y_train.shape}')
###Output
Train dataframe shape: (32701, 10)
Train targets shape: (32701,)
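###Markdown
To see what the augmentation step does to the array sizes, a small run on dummy data (shapes and values are placeholders): with p=0.4 and ten samples, four augmented copies are appended, giving fourteen images, metadata rows and labels.
###Code
_imgs = np.random.randint(0, 256, size=(10, 64, 64, 3), dtype=np.uint8)
_met = np.zeros((10, 5), dtype=np.uint8)
_lab = np.array([0, 1] * 5)
_imgs_aug, _met_aug, _lab_aug = make_train_augmentations(_imgs, _met, _lab, p=0.4, aug='rot90')
print(_imgs_aug.shape, _met_aug.shape, _lab_aug.shape)
###Output
_____no_output_____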
###Markdown
5.00 Modelling 5.01 Class Weighting
###Code
# Due to the high data imbalance, we add extra weight to the target class
neg, pos = np.bincount(y_train)
weight_for_0 = (1 / neg)*(len(y_train)) / 2.0
weight_for_1 = (1 / pos)*(len(y_train)) / 2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.51
Weight for class 1: 28.14
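###Markdown
The weights above come from w_c = N / (2 * n_c), so each class contributes the same total weight to the loss; a quick check using the counts computed in the previous cell:
###Code
# Both products should equal N / 2, i.e. the two classes are balanced in aggregate
print(weight_for_0 * neg, weight_for_1 * pos, len(y_train) / 2)
###Output
_____no_output_____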
###Markdown
5.02 Learning Scheduler
###Code
def build_lrfn(lr_start = 0.000005,
lr_max = 0.000020 * strategy.num_replicas_in_sync,
lr_min = 0.000001,
lr_rampup_epochs = 4,
lr_sustain_epochs = 0,
lr_decay = 0.8):
def lrfn(epoch):
if epoch < lr_rampup_epochs:
lr = (lr_max - lr_start) / lr_rampup_epochs * epoch + lr_start
elif epoch < lr_rampup_epochs + lr_sustain_epochs:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_decay**(epoch - lr_rampup_epochs - lr_sustain_epochs) + lr_min
return lr
return lrfn
lrfn = build_lrfn()
plt.plot([lrfn(epoch) for epoch in range(EPOCHS)])
plt.title('Learning Rate Schedule')
plt.xlabel('Epochs')
plt.ylabel('Learning Rate')
plt.show()
###Output
_____no_output_____
###Markdown
5.03 Compiler Metrics
###Code
# Define metrics to observe while training
METRICS = [keras.metrics.AUC(name='auc')]
###Output
_____no_output_____
###Markdown
5.04 Metadata Model
###Code
# The model we'll feed the metadata into before concatenation
model_metadata = keras.Sequential()
if read_images_in_fold == True:
model_metadata.add(keras.layers.Dense(256, activation='relu', input_shape=(X_train_df.shape[1] - 1,)))
elif read_images_in_fold == False:
model_metadata.add(keras.layers.Dense(256, activation='relu', input_shape=(X_train_df.shape[1],)))
model_metadata.add(keras.layers.BatchNormalization())
model_metadata.add(keras.layers.Dropout(0.2))
model_metadata.add(keras.layers.Dense(256, activation='relu'))
model_metadata.add(keras.layers.BatchNormalization())
model_metadata.add(keras.layers.Dropout(0.4))
###Output
_____no_output_____
###Markdown
5.05 CNN Models
###Code
# The model we'll feed the images into before concatenation
def get_cnn_model(kfold, model_to_use=MODEL_TO_USE, verbose=1):
"""
Returns the model object and the name of the final layer in the model.
*kfold: (int) fold that the CV is currently on (to determine img size)
*model_to_use: (string) model to retrieve
*verbose: ([0,1]) level of output communication. 0=None, 1=All.
"""
if verbose == 1:
print('\nLoading pretrained model...')
densenet121_weights = 'pretrained/densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5'
densenet169_weights = 'pretrained/densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5'
densenet201_weights = 'pretrained/densenet201_weights_tf_dim_ordering_tf_kernels_notop.h5'
efficientnet_b0_weights = 'pretrained/efficientnet-b0_imagenet_1000_notop.h5'
efficientnet_b1_weights = 'pretrained/efficientnet-b1_imagenet_1000_notop.h5'
efficientnet_b2_weights = 'pretrained/efficientnet-b2_imagenet_1000_notop.h5'
efficientnet_b3_weights = 'pretrained/efficientnet-b3_imagenet_1000_notop.h5'
efficientnet_b4_weights = 'pretrained/efficientnet-b4_imagenet_1000_notop.h5'
efficientnet_b5_weights = 'pretrained/efficientnet-b5_imagenet_1000_notop.h5'
inception_resnetv2_weights = 'pretrained/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5'
inceptionv3_weights = 'pretrained/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnet50v2_weights = 'pretrained/resnet50v2_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnet101v2_weights = 'pretrained/resnet101v2_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnet152v2_weights = 'pretrained/resnet152v2_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnext50_weights = 'pretrained/resnext50_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnext101_weights = 'pretrained/resnext101_weights_tf_dim_ordering_tf_kernels_notop.h5'
vgg19_weights = 'pretrained/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5'
xception_weights = 'pretrained/xception_weights_tf_dim_ordering_tf_kernels_notop.h5'
input_shape = (
kfold_params[kfold]['ROWS'],
kfold_params[kfold]['COLS'],
CHANNELS
)
# DenseNet121
if model_to_use == 'densenet121':
from tensorflow.keras.applications import DenseNet121
model_return = DenseNet121(include_top=False, weights=densenet121_weights,
input_shape=input_shape)
# DenseNet169
elif model_to_use == 'densenet169':
from tensorflow.keras.applications import DenseNet169
model_return = DenseNet169(include_top=False, weights=densenet169_weights,
input_shape=input_shape)
# DenseNet201
elif model_to_use == 'densenet201':
from tensorflow.keras.applications import DenseNet201
model_return = DenseNet201(include_top=False, weights=densenet201_weights,
input_shape=input_shape)
# EfficientNet_B0
elif model_to_use == 'efficientnet_b0':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB0(include_top=False, weights=efficientnet_b0_weights,
input_shape=input_shape)
# EfficientNet_B1
elif model_to_use == 'efficientnet_b1':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB1(include_top=False, weights=efficientnet_b1_weights,
input_shape=input_shape)
# EfficientNet_B2
elif model_to_use == 'efficientnet_b2':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB2(include_top=False, weights=efficientnet_b2_weights,
input_shape=input_shape)
# EfficientNet_B3
elif model_to_use == 'efficientnet_b3':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB3(include_top=False, weights=efficientnet_b3_weights,
input_shape=input_shape)
# EfficientNet_B4
elif model_to_use == 'efficientnet_b4':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB4(include_top=False, weights=efficientnet_b4_weights,
input_shape=input_shape)
# EfficientNet_B5
elif model_to_use == 'efficientnet_b5':
import efficientnet.tfkeras as efficientnet
model_return = efficientnet.EfficientNetB5(include_top=False, weights=efficientnet_b5_weights,
input_shape=input_shape)
# InceptionResNetV2
elif model_to_use == 'inception_resnetv2':
from tensorflow.keras.applications import InceptionResNetV2
model_return = InceptionResNetV2(include_top=False, weights=inception_resnetv2_weights,
input_shape=input_shape)
# InceptionV3
elif model_to_use == 'inceptionv3':
from tensorflow.keras.applications import InceptionV3
model_return = InceptionV3(include_top=False, weights=inceptionv3_weights,
input_shape=input_shape)
# ResNet50V2
elif model_to_use == 'resnet50v2':
from tensorflow.keras.applications import ResNet50V2
model_return = ResNet50V2(include_top=False, weights=resnet50v2_weights,
input_shape=input_shape)
# ResNet101V2
elif model_to_use == 'resnet101v2':
from tensorflow.keras.applications import ResNet101V2
model_return = ResNet101V2(include_top=False, weights=resnet101v2_weights,
input_shape=input_shape)
# ResNet152V2
elif model_to_use == 'resnet152v2':
from tensorflow.keras.applications import ResNet152V2
model_return = ResNet152V2(include_top=False, weights=resnet152v2_weights,
input_shape=input_shape)
# ResNeXt50
elif model_to_use == 'resnext50':
from keras_applications.resnext import ResNeXt50
model_return = ResNeXt50(include_top=False, weights=resnext50_weights,
input_shape=input_shape,
backend=keras.backend,
layers=keras.layers,
models=keras.models,
utils=keras.utils)
# ResNeXt101
elif model_to_use == 'resnext101':
from keras_applications.resnext import ResNeXt101
model_return = ResNeXt101(include_top=False, weights=resnext101_weights,
input_shape=input_shape,
backend=keras.backend,
layers=keras.layers,
models=keras.models,
utils=keras.utils)
# VGG19
elif model_to_use == 'vgg19':
from tensorflow.keras.applications import VGG19
model_return = VGG19(include_top=False, weights=vgg19_weights,
input_shape=input_shape)
# Xception
elif model_to_use == 'xception':
from tensorflow.keras.applications import Xception
model_return = Xception(include_top=False, weights=xception_weights,
input_shape=input_shape)
return(model_return)
###Output
_____no_output_____
###Markdown
5.06 Concatenating Models
###Code
def get_complete_model(model_cnn, model_metadata, verbose=1):
"""
Concatenate multiple models, add hidden layers after concatenation
and return complete concatenated model
*model_cnn: the loaded cnn model object to input
*model_metadata: the loaded metadata model object to input
*verbose: ([0,1]) level of output communication. 0=None, 1=All.
"""
if verbose == 1:
print('Creating complete model...\n')
# Pretrained cnn model with GlobalAveragePooling
model_cnn_base = keras.Sequential([
Model(model_cnn.input, model_cnn.output),
keras.layers.GlobalAveragePooling2D()
])
# Concatenate CNN model with metadata model
model_concat = concatenate([model_cnn_base.output, model_metadata.output], axis=1)
# Output layer
model_concat = keras.layers.Dense(1, activation='sigmoid', name='final_output')(model_concat)
model_complete = Model(inputs=[model_cnn_base.input, model_metadata.input], outputs=model_concat)
return model_complete
###Output
_____no_output_____
###Markdown
5.07 Stratified Group Cross Validation
###Code
def stratified_group_k_fold(X, y, groups, k, seed=SEED):
""" https://www.kaggle.com/jakubwasikowski/stratified-group-k-fold-cross-validation """
labels_num = np.max(y) + 1
y_counts_per_group = defaultdict(lambda: np.zeros(labels_num))
y_distr = Counter()
for label, g in zip(y, groups):
y_counts_per_group[g][label] += 1
y_distr[label] += 1
y_counts_per_fold = defaultdict(lambda: np.zeros(labels_num))
groups_per_fold = defaultdict(set)
def eval_y_counts_per_fold(y_counts, fold):
y_counts_per_fold[fold] += y_counts
std_per_label = []
for label in range(labels_num):
label_std = np.std([y_counts_per_fold[i][label] / y_distr[label] for i in range(k)])
std_per_label.append(label_std)
y_counts_per_fold[fold] -= y_counts
return np.mean(std_per_label)
groups_and_y_counts = list(y_counts_per_group.items())
random.Random(seed).shuffle(groups_and_y_counts)
for g, y_counts in sorted(groups_and_y_counts,
key=lambda x: -np.std(x[1])):
best_fold = None
min_eval = None
for i in range(k):
fold_eval = eval_y_counts_per_fold(y_counts, i)
if min_eval is None or fold_eval < min_eval:
min_eval = fold_eval
best_fold = i
y_counts_per_fold[best_fold] += y_counts
groups_per_fold[best_fold].add(g)
all_groups = set(groups)
for i in range(k):
train_groups = all_groups - groups_per_fold[i]
test_groups = groups_per_fold[i]
train_indices = [i for i, g in enumerate(groups) if g in train_groups]
test_indices = [i for i, g in enumerate(groups) if g in test_groups]
yield train_indices, test_indices
###Output
_____no_output_____
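###Markdown
A quick way to verify the grouping property before training: no patient should appear in both the train and validation indices of any fold (this reuses the X_train_df, y_train and groups objects defined earlier).
###Code
_groups_arr = np.array(groups)
for _fold, (_tdx, _vdx) in enumerate(stratified_group_k_fold(X_train_df, y_train, groups, k=KFOLDS, seed=SEED)):
    _overlap = set(_groups_arr[_tdx]) & set(_groups_arr[_vdx])
    print(f'Fold {_fold}: {len(_vdx)} val samples, shared patients: {len(_overlap)}')
###Output
_____no_output_____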
###Markdown
5.08 Train Model
###Code
def get_in_fold_data(kfold, tdx, vdx, read_images_in_fold=read_images_in_fold, loading_bar=False):
"""
*kfold: (int) the current fold in CV
*tdx: (list of ints) train indices for the current fold
*vdx: (list of ints) validation indices for the current fold
*read_images_in_fold: (bool) whether to read the images inside or outside of folds
*loading_bar: (bool) include a loading bar when loading CV images
"""
print('Fetching data...')
# Get values for metadata
X_met, X_met_val, = X_train_df[tdx], X_train_df[vdx]
# Get values for target
y, y_val = y_train[tdx], y_train[vdx]
if read_images_in_fold == False:
# Extract images from full image array
X_met, X_met_val = X_met[:, 1:], X_met_val[:, 1:] # Remove name col
X_met, X_met_val = X_met.astype(np.uint8), X_met_val.astype(np.uint8) # Change np type - must be uint8
        # Get values for imgs - resize each image individually (cv2.resize cannot handle a 4-D batch)
        X_img = np.array([cv2.resize(img, (kfold_params[kfold]['ROWS'],
                                           kfold_params[kfold]['COLS'])) for img in X_train_img[tdx]])
        X_img_val = np.array([cv2.resize(img, (kfold_params[kfold]['ROWS'],
                                               kfold_params[kfold]['COLS'])) for img in X_train_img[vdx]])
elif read_images_in_fold == True:
# Read images in from scratch
X_img = np.array(read_jpgs(X_met[:,0], # Img names
rows=kfold_params[kfold]['ROWS'], # Row size for current fold
cols=kfold_params[kfold]['COLS'], # Col size for current fold
loading_bar=loading_bar))
X_img_val = np.array(read_jpgs(X_met_val[:,0], # Img names
rows=kfold_params[kfold]['ROWS'], # Row size for current fold
cols=kfold_params[kfold]['COLS'], # Col size for current fold
loading_bar=loading_bar))
X_met, X_met_val = X_met[:, 1:], X_met_val[:, 1:] # Remove name col
X_met, X_met_val = X_met.astype(np.uint8), X_met_val.astype(np.uint8)
return X_img, X_img_val, X_met, X_met_val, y, y_val
def make_test_augmentations(img):
"""
    Returns augmented copies of the image; the caller concatenates the original back in.
"""
img_augs = np.concatenate((#np.expand_dims(img, axis=0),
np.expand_dims(np.rot90(img, 1), axis=0),
np.expand_dims(np.rot90(img, 2), axis=0),
np.expand_dims(np.rot90(img, 3), axis=0),
np.expand_dims(np.fliplr(img), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 1)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 2)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 3)), axis=0)),
axis=0)
return(img_augs)
def train_model(model_to_use=MODEL_TO_USE):
k.clear_session()
skf = stratified_group_k_fold(X=X_train_df, y=y_train, groups=groups, k=KFOLDS, seed=SEED)
rocauc_scores = []
print(f'TRAINING {model_to_use.upper()} ON ' + str(KFOLDS) + ' FOLDS\n')
for fold, (tdx, vdx) in enumerate(skf):
print(f'Fold : {fold}')
        print('Img size: ' + str(kfold_params[fold]['ROWS']) + 'x' + str(kfold_params[fold]['COLS']))
print('Augmentation: ' + str(kfold_params[fold]['AUG']))
print(f'Training on {len(tdx)} samples.')
print(f'Validating on {len(vdx)} samples.')
# Load pretrained model & create name to save weights by
model_cnn = get_cnn_model(kfold=fold, model_to_use=MODEL_TO_USE)
model_save_name = 'weights/' + model_name_save + '/' + model_name_save + '_' + str(fold) + '.h5'
# Fetch in-fold data
X_img, X_img_val, X_met, X_met_val, y, y_val = get_in_fold_data(kfold=fold, tdx=tdx, vdx=vdx)
# Image Preprocessing
#X_img, X_img_val = preprocess_imgs(train_imgs=X_img, test_imgs=X_img_val)
# Image augmentations
X_img, X_met, y = make_train_augmentations(X_img=X_img,
X_met=X_met,
y=y,
p=0.4,
aug=kfold_params[fold]['AUG'])
# CONCATENATED MODEL - Edit below
model = get_complete_model(model_cnn=model_cnn,
model_metadata=model_metadata)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=METRICS)
# Define learning rate schedule
lr = LearningRateScheduler(lrfn, verbose=True)
# Define early stopping parameters
es = EarlyStopping(monitor='val_auc',
mode='max',
restore_best_weights=True,
verbose=1,
patience=3)
# Define model checkpoint parameters
mc = ModelCheckpoint(filepath=model_save_name,
save_best_only=True,
save_weights_only=True,
monitor='val_auc',
mode='max',
verbose=0)
# Fit model
model.fit([X_img, X_met], y,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks = [es, lr, mc],
class_weight=class_weight,
verbose=1,
validation_split=0.25)
del [X_img, X_met, y]
# TTAs and validation predictions
print('\nMaking val predictions')
preds = []
for val_idx in range(len(X_img_val)):
# Add augmented images to each img in X_img_val
X_img_val_augs = np.concatenate((np.expand_dims(X_img_val[val_idx], axis=0),
make_test_augmentations(X_img_val[val_idx])))
# Add copies of each corresponding X_met_val for the augmented imgs
X_met_val_augs = np.array([X_met_val[val_idx]] * len(X_img_val_augs))
# Make prediction for each record
pred = model.predict([X_img_val_augs, X_met_val_augs])
pred = np.mean(pred, axis=0)
preds.append(pred)
# Calculate OOF ROCAUC following TTAs
oof_rocauc = metrics.roc_auc_score(y_val, preds)
print('')
print('\nFold ' + str(fold) + ' ROCAUC: ' + str(oof_rocauc))
print('')
rocauc_scores.append(oof_rocauc)
# Clean up
del [X_img_val, X_met_val, y_val, pred, tdx, vdx, model, oof_rocauc]
gc.collect()
print('\n\n############################')
print('Mean OOF ROCAUC: '+ str(np.mean(rocauc_scores))+' (±'+str(round(np.std(rocauc_scores), 5))+')')
print('############################\n\n')
return(rocauc_scores)
rocauc_scores = train_model(model_to_use=MODEL_TO_USE)
# Save the fold results
rocauc_scores = pd.DataFrame({'rocauc':rocauc_scores})
rocauc_scores_name = f'scores/{model_name_save}_scores.csv'
rocauc_scores.to_csv(rocauc_scores_name, index=False)
print(f'--------------\nFOLD SCORES\n--------------\n{rocauc_scores}')
print(f'\n--------------\nFOLD STATS\n--------------\n{rocauc_scores.describe()}')
plt.plot(rocauc_scores.index, rocauc_scores, marker='.')
plt.title('ROCAUC Fold Results')
plt.xlabel('Fold')
plt.ylabel('ROCAUC')
plt.show()
###Output
_____no_output_____
###Markdown
6.00 Testing 6.01 Test metadata
###Code
# Clean up memory
try:
del [X_train_img, X_train_df, y_train]
except:
pass
test_df = pd.read_csv(test_metadata_path)
duplicates = pd.read_csv('2020_Challenge_duplicates.csv')
# Replace whitespace in anatom_site_general_challenge with underscore
test_df['anatom_site_general_challenge'] = test_df['anatom_site_general_challenge'].replace(' ', '_', regex=True)
# Encode sex feature
test_df = test_df.merge(pd.get_dummies(test_df[['sex','anatom_site_general_challenge']]),
left_index=True, right_index=True)
test_df.drop(['patient_id','sex', 'anatom_site_general_challenge'], axis=1, inplace=True)
# Remove duplicates (note: the result is not reassigned here, so every test row is kept for the submission)
test_df[(~test_df['image_name'].isin(duplicates['ISIC_id_paired']))]
test_df = np.asarray(test_df)
del duplicates
test_df
###Output
_____no_output_____
###Markdown
6.02 Test CNN Model
###Code
def import_model(kfolds, model_to_use=MODEL_TO_USE):
"""
    *kfolds: (int) number of folds whose saved weights should be loaded
"""
models = []
for fold in tqdm(range(kfolds)):
model_cnn = get_cnn_model(kfold=fold, model_to_use=MODEL_TO_USE, verbose=0)
model = get_complete_model(model_cnn=model_cnn,
model_metadata=model_metadata,
verbose=0)
model.load_weights('../output/weights/' + model_name_save + '/' + model_name_save + '_' + str(fold) + '.h5')
models.append(model)
return(models)
###Output
_____no_output_____
###Markdown
6.03 Test Augmentation Pipeline
###Code
def make_test_augmentations(img):
"""
Returns augmented image(s) and original.
"""
img_augs = np.concatenate((np.expand_dims(img, axis=0),
np.expand_dims(np.rot90(img, 1), axis=0),
np.expand_dims(np.rot90(img, 2), axis=0),
np.expand_dims(np.rot90(img, 3), axis=0),
np.expand_dims(np.fliplr(img), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 1)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 2)), axis=0),
np.expand_dims(np.fliplr(np.rot90(img, 3)), axis=0)),
axis=0)
return(img_augs)
###Output
_____no_output_____
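###Markdown
A quick shape check of the TTA stack on a dummy array (real inputs are ROWS x COLS x 3): eight views per image, the original plus three rotations and their four mirrored counterparts.
###Code
print(make_test_augmentations(np.zeros((4, 4, 3))).shape)  # expected: (8, 4, 4, 3)
###Output
_____no_output_____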
###Markdown
6.04 Make Submission
###Code
def make_submission(test_df):
# Read images in and predict
preds_test = [] # We'll store the final prediction for each image here
# Convert pixel values to float
#print('Preparing image standardiser...')
#train_imgs = train_imgs.astype(float)
# Get per-channel means and stds
#train_means = train_imgs.reshape(-1, train_imgs.shape[-1]).mean(axis=0)
#train_stds = train_imgs.reshape(-1, train_imgs.shape[-1]).std(axis=0)
print('Getting models...')
time.sleep(2)
# Retrieve model to use - as per fold image sizes
models = import_model(kfolds=KFOLDS, model_to_use=MODEL_TO_USE)
print('Generating predictions...')
time.sleep(2)
# Loop through all the test images
for image_row in tqdm(test_df):
        # Build the path to the preprocessed jpg for this test image
        image_path = os.path.join(test_img_path, image_row[0]) + '.jpg'
        # Read the jpg image in
        image = cv2.imread(image_path)
# Drop image name from metadata
image_row = np.delete(image_row, 0).astype(np.uint8)
image_row = np.expand_dims(image_row, axis=0)
# AUGMENTATIONS
images_all = make_test_augmentations(image)
pred_proba_list = []
for image in images_all:
image = np.expand_dims(image, axis=0)
pred_proba = np.mean([model.predict([image, image_row]) for model in models], axis=0)
pred_proba_list.append(pred_proba)
pred_proba = np.mean(pred_proba_list, axis=0)
preds_test.append(pred_proba.tolist()[0][0])
    # Create submission df (column 0 of test_df holds the image name)
    submission = pd.DataFrame({sample_sub.columns[0]:test_df[:,0],
                               sample_sub.columns[1]:preds_test})
return(submission)
# Create submission
submission = make_submission(test_df=test_df)
submission_name = f'submissions/{model_name_save}_submission.csv'
submission.to_csv(submission_name, index=False)
submission.head()
del submission
# Some definitions going forward
ROWS = 512 # Default row size
COLS = 512 # Default col size
CHANNELS = 3
EPOCHS = 8
BATCH_SIZE = 8
CLASSES = 2
# Read all images in and subset in CV, or Read images inside each fold in CV
read_images_in_fold = True
# -- Models ran --
#MODEL_TO_USE = 'densenet201'
#MODEL_TO_USE = 'inception_resnetv2'
#MODEL_TO_USE = 'xception'
#MODEL_TO_USE = 'inceptionv3'
# -- Submissions generated --
MODEL_TO_USE = 'vgg19'
# -- Staged for running --
#MODEL_TO_USE = 'efficientnet_b5'
####MODEL_TO_USE = 'resnext101'
#MODEL_TO_USE = 'resnet152v2'
####MODEL_TO_USE = 'efficientnet_b0'
####MODEL_TO_USE = 'efficientnet_b1'
####MODEL_TO_USE = 'efficientnet_b2'
####MODEL_TO_USE = 'efficientnet_b3'
####MODEL_TO_USE = 'efficientnet_b4'
####MODEL_TO_USE = 'densenet169'
####MODEL_TO_USE = 'densenet121'
####MODEL_TO_USE = 'resnet50v2'
####MODEL_TO_USE = 'resnet101v2'
####MODEL_TO_USE = 'resnext50'
# Parameters for each fold
# standard_models = [128, 256, 384, 512]
# efficient_nets = [224, 240, 260, 300, 380, 456]
kfold_params = {
0: {'ROWS':ROWS,'COLS':COLS,'AUG':'fliplr'},
1: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot90' },
2: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot180'},
3: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot270'},
4: {'ROWS':ROWS,'COLS':COLS,'AUG':'fliplr'},
5: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot90' },
6: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot180'},
7: {'ROWS':ROWS,'COLS':COLS,'AUG':'rot270'}
}
KFOLDS = len(kfold_params)
SEED = 14
np.random.seed(SEED)
model_name_save = MODEL_TO_USE + '_' + str(ROWS) + 'x' + str(COLS) + '_seed' + str(SEED)
# Create weights path if does not exist already
if not os.path.exists(f'weights/{model_name_save}'):
os.mkdir(f'weights/{model_name_save}')
print(f'Model name: {model_name_save}')
# Create submission
submission = make_submission(test_df=test_df)
submission_name = f'submissions/{model_name_save}_submission.csv'
submission.to_csv(submission_name, index=False)
submission.head()
###Output
_____no_output_____
###Markdown
Train
Following from [Preprocessing](https://github.com/TheNerdyCat/deepfake-detection-challenge/blob/master/output/preprocessing.ipynb), this stage will look at data augmentation and subsequently training the model.
First we will undersample the images to balance REAL and FAKE images in both the train and validation sets. There are actually more FAKE images than REAL in this dataset, so this will be addressed accordingly.
We will read our extracted faces using OpenCV and perform any data augmentation. Following this, we will define X and X_test. Then we'll read the metadata to label the extracted faces as FAKE or REAL, defining them into y and y_test.
After we have our training data and validation data ready and shuffled, we'll train our model.
###Code
import pandas as pd
import numpy as np
import os
import json # To read the metadata
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras import backend as k
from tensorflow.keras import layers
from tensorflow.keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.callbacks import Callback, EarlyStopping
#import torch
#import keras
#from keras import Model, Sequential
#from keras.layers import *
#from keras.optimizers import *
#from keras.callbacks import LearningRateScheduler
import cv2
from sklearn.model_selection import KFold
from sklearn.metrics import log_loss
from tqdm.notebook import tqdm
import random
import gc
import warnings
warnings.filterwarnings("ignore")
#tf.debugging.set_log_device_placement(True) # Enable GPU logging
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
train_images_path = '../input/train_images/'
train_images = os.listdir(train_images_path)
metadata_path = '../input/train_metadata/'
metadata_dir = os.listdir(metadata_path)
# Read in all the metadata files to make one inclusive dict
metadata = {}
for i, file in enumerate(metadata_dir):
with open('../input/train_metadata/' + file) as json_file:
metadata = {**metadata, **json.load(json_file)}
X_paths = []
for img in train_images:
img = train_images_path + img
X_paths.append(img)
y = []
for label in train_images:
if metadata[label.split('_')[0] + '.mp4']['label'] == 'REAL':
y.append(0)
else:
y.append(1)
def shuffle(X, y):
new_train = []
for m, n in zip(X, y):
new_train.append([m, n])
random.shuffle(new_train)
X, y = [], []
for x in new_train:
X.append(x[0])
y.append(x[1])
return X, y
X_paths, y = shuffle(X_paths, y)
# Create X_test from 25% of X
X_test_paths = X_paths[:round(len(X_paths) / 100 * 25)]
X_paths = X_paths[round(len(X_paths) / 100 * 25):]
# Create y_test from 25% of y
y_test = y[:round(len(y) / 100 * 25)]
y = y[round(len(y) / 100 * 25):]
X_paths, y = shuffle(X_paths, y)
X_test_paths, y_test = shuffle(X_test_paths, y_test)
print('There are ' + str(y.count(1)) + ' fake train samples')
print('There are ' + str(y.count(0)) + ' real train samples')
print('There are ' + str(y_test.count(1)) + ' fake test samples')
print('There are ' + str(y_test.count(0)) + ' real test samples')
###Output
_____no_output_____
###Markdown
Undersampling
Next we'll balance our data, using undersampling techniques. Source for this method can be found [here](https://www.kaggle.com/unkownhihi/starter-kernel-with-cnn-model-ll-lb-0-69235Apply-Underbalancing-Techinique)
###Code
real = []
fake = []
for m, n in zip(X_paths, y):
if n == 0:
real.append(m)
else:
fake.append(m)
fake = random.sample(fake, len(real))
X_paths, y = [], []
for x in real:
X_paths.append(x)
y.append(0)
for x in fake:
X_paths.append(x)
y.append(1)
real = []
fake = []
for m, n in zip(X_test_paths, y_test):
if n == 0:
real.append(m)
else:
fake.append(m)
fake = random.sample(fake, len(real))
X_test_paths, y_test = [], []
for x in real:
X_test_paths.append(x)
y_test.append(0)
for x in fake:
X_test_paths.append(x)
y_test.append(1)
X_paths, y = shuffle(X_paths, y)
X_test_paths, y_test = shuffle(X_test_paths, y_test)
print('There are ' + str(y.count(1)) + ' fake train samples')
print('There are ' + str(y.count(0)) + ' real train samples')
print('There are ' + str(y_test.count(1)) + ' fake test samples')
print('There are ' + str(y_test.count(0)) + ' real test samples')
###Output
_____no_output_____
###Markdown
Data AugmentationNext we read the extracted faces into arrays, one-hot encode the labels, and then apply the augmentation described below.
###Code
ROWS = 256
COLS = 256
CHANNELS = 3
CLASSES = 2
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR)
return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)
def prepare_data(images):
m = len(images)
X = np.zeros((m, ROWS, COLS, CHANNELS), dtype=np.uint8)
y = np.zeros((1, m), dtype=np.uint8)
for i, image_file in enumerate(images):
X[i,:] = read_image(image_file)
        # Note: labels here are encoded as REAL -> 1, FAKE -> 0, the opposite of the
        # earlier y list; what matters is that this convention is used consistently below.
        if metadata[image_file.split('/')[3].split('_')[0]+'.mp4']['label'] == 'REAL':
y[0, i] = 1
elif metadata[image_file.split('/')[3].split('_')[0]+'.mp4']['label'] == 'FAKE':
y[0, i] = 0
return X, y
def convert_to_one_hot(Y, C):
Y = np.eye(C)[Y.reshape(-1)].T
return Y
train_set_x, train_set_y = prepare_data(X_paths)
test_set_x, test_set_y = prepare_data(X_test_paths)
X_train = train_set_x / 255
X_test = test_set_x / 255
Y_train = convert_to_one_hot(train_set_y, CLASSES).T
Y_test = convert_to_one_hot(test_set_y, CLASSES).T
print ("Number of training examples =", X_train.shape[0])
print ("Number of test examples =", X_test.shape[0])
print ("X_train shape:", X_train.shape)
print ("Y_train shape:", Y_train.shape)
print ("X_test shape:", X_test.shape)
print ("Y_test shape:", Y_test.shape)
###Output
_____no_output_____
###Markdown
As per the DFDC research paper, we apply the following augmentation techniques: - ~~1/3 of the videos I kept unchanged~~ - ~~2/9 of the videos I resized to 1/4 of their sizes~~ - 2/9 of the videos I reduced FPS to 15 - 2/9 of the videos I applied a hard compressionI suspect the key is the last bullet: applying a hard compression. This reduces the videos' file sizes to <1/10 of their original sizes, and makes it much harder for our algorithms to correctly classify them as fake or real (a hedged sketch of a frame-level compression augmentation is included after the resizing code below).**IMPORTANT**: I made sure these 4 proportions are respected in both training and validation sets.
###Code
def resize_images(X, size=4):
"""
Resizes images, then resizes again back to original size
"""
    # Iterate by index and write back, otherwise the resized copies are discarded
    # and X is returned unchanged.
    for i in range(X.shape[0]):
        img = cv2.resize(X[i], (int(ROWS / size), int(COLS / size)))
        X[i] = cv2.resize(img, (ROWS, COLS))
    return X
def apply_img_function(X, func, proportion, seed=123):
"""
Extracts sample from images array and applies function given
"""
np.random.seed(seed)
idxs = np.random.choice(X.shape[0], int(len(X)*proportion), replace=False)
X_sample = X[idxs]
X_sample_applied = func(X_sample)
X[idxs] = X_sample_applied
return X
X_train = apply_img_function(X_train, func=resize_images, proportion=1/3, seed=14)
X_test = apply_img_function(X_test, func=resize_images, proportion=1/3, seed=14)
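# Hedged sketch (an assumption, not from the original notebook): the DFDC-style
# "hard compression" mentioned above can be approximated on still frames with a
# JPEG encode/decode round-trip at low quality; the quality value of 10 and the
# 2/9 proportion are illustrative assumptions.
def compress_images(X, quality=10):
    """JPEG-compress and decode images in place to mimic heavy video compression."""
    for i in range(X.shape[0]):
        img_uint8 = (X[i] * 255).astype(np.uint8)  # X holds floats scaled to [0, 1]
        ok, buf = cv2.imencode('.jpg', img_uint8, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        if ok:
            X[i] = cv2.imdecode(buf, cv2.IMREAD_COLOR) / 255
    return X
# Example usage (commented out so the original pipeline is unchanged):
# X_train = apply_img_function(X_train, func=compress_images, proportion=2/9, seed=15)
# X_test = apply_img_function(X_test, func=compress_images, proportion=2/9, seed=15)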
###Output
_____no_output_____
###Markdown
ModellingWe implement our ResNet using Keras.
###Code
def identity_block(X, f, filters, stage, block):
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. We'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1,1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1,1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1,1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
return X
def convolutional_block(X, f, filters, stage, block, s=2):
# defining name basis
conv_name_base='res' + str(stage) + block + '_branch'
bn_name_base='bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides=(s,s), name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH ####
X_shortcut = Conv2D(F3, (1, 1), strides=(s,s), name = conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
return X
def ResNet50(input_shape = (256, 256, 3), classes=2):
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=2, block='a', s=1)
X = identity_block(X, 3, [256, 256, 1024], stage=2, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=2, block='c')
# Stage 3
X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=3, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=3, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=3, block='c')
X = identity_block(X, 3, [512, 512, 2048], stage=3, block='d')
# Stage 4
X = convolutional_block(X, f=3, filters=[1024, 1024, 4096], stage=4, block='a', s=2)
X = identity_block(X, 3, [1024, 1024, 4096], stage=4, block='b')
X = identity_block(X, 3, [1024, 1024, 4096], stage=4, block='c')
X = identity_block(X, 3, [1024, 1024, 4096], stage=4, block='d')
X = identity_block(X, 3, [1024, 1024, 4096], stage=4, block='e')
X = identity_block(X, 3, [1024, 1024, 4096], stage=4, block='f')
# Stage 5
X = convolutional_block(X, f=3, filters=[2048, 2048, 8192], stage=5, block='a', s=2)
X = identity_block(X, 3, [2048, 2048, 8192], stage=5, block='b')
X = identity_block(X, 3, [2048, 2048, 8192], stage=5, block='c')
# AVGPOOL.
X = AveragePooling2D((2, 2), name='avg_pool')(X)
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs=X_input, outputs=X, name='ResNet50')
return model
kfolds = 5
kf = KFold(n_splits=kfolds)
losses = []
for fold, (tdx, vdx) in enumerate(kf.split(X_train, Y_train)):
print(f'Fold : {fold}')
X, X_val, Y, Y_val = X_train[tdx], X_train[vdx], Y_train[tdx], Y_train[vdx]
model = ResNet50(input_shape=(256, 256, 3), classes=2)
model.compile(optimizer='adam', loss='binary_crossentropy')
es = EarlyStopping(monitor='loss',
mode='min',
restore_best_weights=True,
verbose=2,
patience=10)
    # Train on the current fold only; fitting on the full X_train/Y_train would leak
    # the validation fold into training.
    model.fit(X, Y, callbacks=[es], epochs=10, batch_size=64, verbose=1)
pred = model.predict([X_val])
loss = log_loss(Y_val, pred)
model.save_weights(f'resnet50_{fold}.h5')
print('')
print('Fold ' + str(fold) + ' log loss: ' + str(loss))
print('')
losses.append(loss)
gc.collect()
print(np.mean(losses))
preds = model.evaluate(X_test, Y_test, verbose=0)
print ("Loss = " + str(preds))
kfolds = 5
# Import the weights of our model
models = []
for i in range(kfolds):
    model = ResNet50(input_shape=(256, 256, 3), classes=2)  # must match the input shape used during training
model.load_weights(f'../output/resnet50_{i}.h5')
models.append(model)
ensemble_probs = np.mean([model.predict(X_test) for model in models], axis=0)  # average the fold predictions
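# Hedged sketch (not part of the original notebook): turn the averaged fold
# probabilities into hard labels and score the ensemble on the held-out set.
ensemble_labels = np.argmax(ensemble_probs, axis=1)   # predicted class per image
true_labels = np.argmax(Y_test, axis=1)               # undo the one-hot encoding
print('Ensemble accuracy:', np.mean(ensemble_labels == true_labels))
print('Ensemble log loss:', log_loss(Y_test, ensemble_probs))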
###Output
_____no_output_____ |
utilities/Make JSONs.ipynb | ###Markdown
---Filter out the teachers / nan stuff
###Code
not_yet_graduated = ['Grady']
website_info = website_info[
~(
(website_info['Last Name'] == 'Test')
| website_info['First Name'].isnull()
| website_info['First Name'].isin(not_yet_graduated)
)
]
website_info.head(20)
###Output
_____no_output_____
###Markdown
---Rename the columns to something that can be ingested by the js
###Code
column_map = {
'First Name': 'firstName',
'Last Name': 'lastName',
'Tagline': 'reelThemIn',
'Bio': 'bio',
'GitHub': 'github',
'LinkedIn': 'linkedin',
'Is Job Searching': 'job_searching',
'Website Portfolio': 'portfolio',
'Capstone Video': 'capstoneVideo',
'Podcast iframe': 'podcast'
}
website_info = website_info.rename(column_map, axis=1)
website_info = website_info[column_map.values()]
website_info.columns
###Output
_____no_output_____
###Markdown
---Make the paragraphs
###Code
def html_paragraph(v):
    # Coerce a cell to plain text for the site; missing values become an empty string.
    if pd.isna(v):
        return ""
    return str(v)
website_info['reelThemIn'] = website_info['reelThemIn'].apply(html_paragraph)
website_info['bio'] = website_info['bio'].apply(html_paragraph)
###Output
_____no_output_____
###Markdown
---Clean up some excess characters
###Code
for col in website_info.columns:
    try:
        # Normalise curly quotes, apostrophes and dashes in the string columns
        website_info[col] = website_info[col] \
            .str.replace('“', '"') \
            .str.replace('”', '"') \
            .str.replace("’", "'") \
            .str.replace("—", "-")
    except AttributeError:
        # Non-string columns have no .str accessor; skip them
        pass
###Output
_____no_output_____
###Markdown
---Gather the image paths
###Code
def image_path(name, idx):
return f"../assets/img/resized/{name.lower()}{idx}.jpeg"
website_info['proImg'] = website_info['firstName'].apply(image_path, idx=1)
website_info['funImg'] = website_info['firstName'].apply(image_path, idx=2)
website_info['proImg'].head()
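# Hedged sketch (an assumption, not in the original notebook): flag any rows whose
# resized image is missing on disk, mirroring the os.path.exists check used for
# resumes further below (assumes `os` is imported earlier in this notebook).
missing_imgs = website_info[~website_info['proImg'].apply(os.path.exists)]
print('Rows with a missing professional image:', len(missing_imgs))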
###Output
_____no_output_____
###Markdown
---Resume paths
###Code
def resume_path(name):
path = f"../assets/resume/{name.lower()}.pdf"
if os.path.exists(path):
return path
website_info['resume'] = website_info['firstName'].apply(resume_path)
website_info[['firstName', 'resume']]
###Output
_____no_output_____
###Markdown
---Assign Ids
###Code
website_info['id'] = website_info.index + 1
###Output
_____no_output_____
###Markdown
---Take a peek
###Code
website_info.head(1)
website_info.to_json("./../data/cohort.json", orient='records')
###Output
_____no_output_____ |
docs/Tutorial/logi_and_multiclass.ipynb | ###Markdown
Logistic Regression and Multinomial ExtensionWe would like to use an example to show how the best subset selection for logistic regression works in our program. Titanic DatasetConsider the Titanic dataset obtained from the Kaggle competition: https://www.kaggle.com/c/titanic/data. The dataset consists of data about 889 passengers, and the goal of the competition is to predict the survival (yes/no) based on features including the class of service, the sex, the age, etc.
###Code
import numpy as np
import pandas as pd
import csv
dt = pd.read_csv("./train.csv")
print(dt.head(5))
###Output
PassengerId Survived Pclass \
0 1 0 3
1 2 1 1
2 3 1 3
3 4 1 1
4 5 0 3
Name Sex Age SibSp \
0 Braund, Mr. Owen Harris male 22.0 1
1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1
2 Heikkinen, Miss. Laina female 26.0 0
3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1
4 Allen, Mr. William Henry male 35.0 0
Parch Ticket Fare Cabin Embarked
0 0 A/5 21171 7.2500 NaN S
1 0 PC 17599 71.2833 C85 C
2 0 STON/O2. 3101282 7.9250 NaN S
3 0 113803 53.1000 C123 S
4 0 373450 8.0500 NaN S
###Markdown
We only focus on some numeric or classification variables:- predictor variables: $Pclass,\ Sex,\ Age,\ SibSp,\ Parch,\ Fare,\ Embarked$;- response variable is $Survived$.
###Code
dt = dt.iloc[:, [1,2,4,5,6,7,9,11]] # variables interested
dt['Pclass'] = dt['Pclass'].astype(str)
print(dt.head(5))
###Output
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 male 22.0 1 0 7.2500 S
1 1 1 female 38.0 1 0 71.2833 C
2 1 3 female 26.0 0 0 7.9250 S
3 1 1 female 35.0 1 0 53.1000 S
4 0 3 male 35.0 0 0 8.0500 S
###Markdown
However, some rows contain missing value (NaN) and we need to drop them.
###Code
dt = dt.dropna()
print('sample size: ', dt.shape)
###Output
sample size: (712, 8)
###Markdown
Then use dummy variables to replace classification variables:
###Code
dt1 = pd.get_dummies(dt)
print(dt1.head(5))
###Output
Survived Age SibSp Parch Fare Pclass_1 Pclass_2 Pclass_3 \
0 0 22.0 1 0 7.2500 0 0 1
1 1 38.0 1 0 71.2833 1 0 0
2 1 26.0 0 0 7.9250 0 0 1
3 1 35.0 1 0 53.1000 1 0 0
4 0 35.0 0 0 8.0500 0 0 1
Sex_female Sex_male Embarked_C Embarked_Q Embarked_S
0 0 1 0 0 1
1 1 0 1 0 0
2 1 0 0 0 1
3 1 0 0 0 1
4 0 1 0 0 1
###Markdown
Now we split `dt1` into training set and testing set:
###Code
from sklearn.model_selection import train_test_split
X = np.array(dt1.drop('Survived', axis = 1))
Y = np.array(dt1.Survived)
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size = 0.33, random_state = 0)
print('train size: ', train_x.shape[0])
print('test size:', test_x.shape[0])
###Output
train size: 477
test size: 235
###Markdown
Here `train_x` contains:- V0: Age- V1: number of siblings / spouses aboard the Titanic (SibSp)- V2: number of parents / children aboard the Titanic (Parch)- V3: passenger fare (Fare)- V4: dummy variable, 1st ticket class (1-yes, 0-no)- V5: dummy variable, 2nd ticket class (1-yes, 0-no)- V6: dummy variable, 3rd ticket class (1-yes, 0-no)- V7: dummy variable, female sex (1-yes, 0-no)- V8: dummy variable, male sex (1-yes, 0-no)- V9: dummy variable, embarked at Cherbourg (1-yes, 0-no)- V10: dummy variable, embarked at Queenstown (1-yes, 0-no)- V11: dummy variable, embarked at Southampton (1-yes, 0-no)And `train_y` indicates whether the passenger survived (1-yes, 0-no).
###Code
print('train_x:\n', train_x[0:5, :])
print('train_y:\n', train_y[0:5])
###Output
train_x:
[[54. 1. 0. 59.4 1. 0. 0. 1. 0.
1. 0. 0. ]
[30. 0. 0. 8.6625 0. 0. 1. 1. 0.
0. 0. 1. ]
[47. 0. 0. 38.5 1. 0. 0. 0. 1.
0. 0. 1. ]
[28. 2. 0. 7.925 0. 0. 1. 0. 1.
0. 0. 1. ]
[29. 1. 0. 26. 0. 1. 0. 1. 0.
0. 0. 1. ]]
train_y:
[1 0 0 0 1]
###Markdown
Best Subset Selection for Logistic RegressionThe `abessLogistic()` function in the `abess.linear` allows you to perform best subset selection in a highly efficient way. For example, in the Titanic sample, if you want to look for a best subset with no more than 5 variables on the logistic model, you can call:
###Code
from abess.linear import abessLogistic
s = 5 # target sparsity
model = abessLogistic(support_size = range(0, s + 1))
model.fit(train_x, train_y)
###Output
_____no_output_____
###Markdown
Now the `model.coef_` contains the coefficients of the logistic model with no more than 5 variables. That is, those variables with a coefficient of 0 are unused in the model:
###Code
print(model.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
By default, the `abessLogistic` function sets `support_size = range(0, min(p, n/log(n)p))` and the best support size is determined by the Extended Bayesian Information Criterion (EBIC). You can change the tuning criterion by specifying the argument `ic_type`. The available tuning criteria are `gic`, `aic`, `bic` and `ebic`. For a quicker solution, you can change the tuning strategy to a golden-section path, which tries to find the elbow point of the tuning criterion over the hyperparameter space. Here we give an example.
###Code
model_gs = abessLogistic(path_type = "pgs", s_min = 0, s_max = s)
model_gs.fit(train_x, train_y)
print(model_gs.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
where `s_min` and `s_max` bound the support size, and this model gives the same answer as before. Interpret the ResultAfter fitting with `model.fit()`, we can explore the result further to interpret it. As shown above, `model.coef_` contains the sparse coefficients of the variables, and the non-zero values indicate the "important" variables chosen in the model.
###Code
print('Intercept: ', model.intercept_)
print('coefficients: \n', model.coef_)
print('Used variables\' index:', np.nonzero(model.coef_ != 0)[0])
###Output
Intercept: [0.57429775]
coefficients:
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
Used variables' index: [0 1 4 6 7]
###Markdown
The training loss and the score under information criterion:
###Code
print('Training Loss: ', model.train_loss_)
print('IC: ', model.ic_)
###Output
Training Loss: [204.35270048]
IC: [464.39204991]
###Markdown
Make a PredictionThe estimated model can be used for prediction. Just call the `model.predict()` function:
###Code
fitted_y = model.predict(test_x)
print(fitted_y)
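# A small hedged sketch (not in the original tutorial): compare the hard predictions
# with the true labels to get the classification accuracy on the test set.
print('Test accuracy: ', np.mean(fitted_y == test_y))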
###Output
[0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0.
1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0. 1.
1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0.
1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1.
1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0.
0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 1. 0. 1.
1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0.
1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Besides, you can also obtain the survival probability of each observation with `model.predict_proba()`. Those with a probability greater than 0.5 are classified as "1" (survived).
###Code
fitted_p = model.predict_proba(test_x)
print(fitted_p)
###Output
[0.49256613 0.25942968 0.84928463 0.20204183 0.03801548 0.04022349
0.72351443 0.23115622 0.23115622 0.66834673 0.96775535 0.64905946
0.98461921 0.15238867 0.25004079 0.57640212 0.26995968 0.71264582
0.37791835 0.1771314 0.25773297 0.75392142 0.87974411 0.40251569
0.56441882 0.34057869 0.22005156 0.067159 0.57880531 0.33647767
0.15655122 0.02682661 0.14553043 0.69663788 0.89078445 0.87925152
0.91926004 0.59081387 0.42997279 0.45653474 0.38846964 0.09020182
0.05742461 0.07773719 0.0994852 0.11006334 0.9819574 0.14219863
0.1096089 0.96940171 0.71351188 0.69663788 0.63663757 0.25942968
0.54978583 0.53309793 0.07032472 0.0706292 0.86889888 0.37901167
0.43876674 0.03084541 0.14553043 0.19993615 0.29180956 0.11828599
0.94586145 0.30610513 0.98763221 0.80911714 0.25942968 0.93051703
0.9097025 0.51285362 0.04924417 0.53765354 0.48242039 0.26040948
0.09474175 0.3384564 0.55107315 0.88025271 0.09058398 0.81733446
0.86836852 0.09474175 0.04461544 0.28075505 0.78890012 0.13893026
0.02434171 0.04697945 0.70146853 0.91404969 0.66232291 0.0994852
0.93719603 0.8422183 0.1096089 0.15469685 0.15238867 0.85879022
0.22005156 0.24091195 0.21168044 0.15238867 0.60493878 0.32644935
0.26125213 0.07517093 0.13893026 0.74034636 0.84746075 0.45213182
0.0706292 0.25942968 0.22005156 0.01835698 0.14163263 0.20211369
0.15238867 0.09990237 0.23918546 0.73072611 0.26215016 0.03608545
0.03870124 0.16253688 0.74034636 0.97993672 0.08170611 0.64073592
0.84033393 0.85210036 0.80983396 0.97257783 0.63663757 0.01819022
0.04521358 0.11500215 0.35283318 0.0604244 0.80983396 0.65427173
0.56441882 0.21090587 0.09020182 0.15238867 0.09205769 0.13258298
0.07032472 0.10443874 0.67329436 0.91047691 0.87141113 0.13258298
0.13893026 0.69001575 0.9854175 0.74034636 0.95157309 0.09990237
0.97884484 0.51066947 0.04441775 0.04441775 0.28361352 0.03487023
0.49488971 0.1178021 0.64073592 0.62512052 0.97884484 0.0706292
0.50493039 0.62403068 0.86836852 0.13893026 0.17455761 0.3031159
0.07773719 0.37901167 0.11778441 0.4701259 0.40262288 0.9369219
0.17455761 0.16689812 0.66640667 0.87338811 0.24261599 0.58525135
0.76060241 0.09058398 0.958343 0.72981059 0.30511879 0.29180956
0.77425595 0.96775535 0.0858588 0.86836852 0.03084541 0.71900957
0.08726302 0.05295266 0.34866263 0.32853374 0.034404 0.15950977
0.91085503 0.52533827 0.80136124 0.55222273 0.07394554 0.24917023
0.76475846 0.73431446 0.27182894 0.8976234 0.67329436 0.04441775
0.30124969 0.97648392 0.16253688 0.14892722 0.02069282 0.28267012
0.05742461 0.05012194 0.12648308 0.06745077 0.08275843 0.09020182
0.067159 ]
###Markdown
We can also generate an ROC curve and calculate the AUC value. On this dataset, the AUC is 0.817, which is quite close to 1.
###Code
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, _ = roc_curve(test_y, fitted_p)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--')
plt.show()
print('AUC: ', auc(fpr, tpr))
###Output
_____no_output_____
###Markdown
Extension: Multi-class Classification Best subset selection for multinomial logistic regressionWhen the number of classes is more than 2, we call it a multi-class classification task. Logistic regression can be extended to model several classes of events, such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1, with a sum of one. The extended model is multinomial logistic regression.To arrive at the multinomial logistic model, one can imagine, for $K$ possible classes, running $K−1$ independent logistic regression models, in which one class is chosen as a "pivot" and then the other $K−1$ classes are separately regressed against the pivot outcome. This would proceed as follows, if class $K$ (the last outcome) is chosen as the pivot:$$\begin{aligned} \ln (\mathbb{P}(y=1)/\mathbb{P}(y=K)) = x^T\beta^{(1)},\\ \dots\ \dots\\ \ln (\mathbb{P}(y=K-1)/\mathbb{P}(y=K)) = x^T\beta^{(K-1)}.\end{aligned}$$Then, the probability of choosing the j-th class can be easily derived to be:$$ \mathbb{P}(y=j) = \frac{\exp(x^T\beta^{(j)})}{1+\sum_{k=1}^{K-1} \exp(x^T\beta^{(k)})},$$and subsequently, we would predict the $j^*$-th class where $j^*=\arg\max_j \mathbb{P}(y=j)$. Notice that, for the $K$ possible classes case, there are $p\times(K−1)$ unknown parameters: $\beta^{(1)},\dots,\beta^{(K−1)}$ to be estimated. Because the number of parameters increases with $K$, it is even more urgent to constrain the model complexity. The best subset selection for multinomial logistic regression aims to maximize the log-likelihood function and control the model complexity by restricting $B=(\beta^{(1)},\dots,\beta^{(K−1)})$ with $||B||_{0,2}\leq s$, where $||B||_{0,2}=\sum_{i=1}^p I(B_{i\cdot}\neq 0)$, $B_{i\cdot}$ is the $i$-th row of the coefficient matrix $B$ and $0\in R^{K-1}$ is an all-zero vector. In other words, each row of $B$ would be either all zero or all non-zero. Multinomial logistic regression with `abess` PackageWe shall conduct multinomial logistic regression on an artificial dataset for demonstration. The `gen_data_splicing()` function in `abess.gen_data` provides a simple way to generate data suitable for this task. The assumption behind it is that the response vector follows a multinomial distribution. The artificial dataset contains 100 observations and 20 predictors, but only five predictors have influence on the three possible classes.
###Code
from abess.gen_data import gen_data_splicing
n = 100 # sample size
p = 20 # all predictors
k = 5 # real predictors
M = 3 # number of classes
np.random.seed(0)
dt = gen_data_splicing(n = n, p = p, k = k, family = "multinomial", M = M)
print(dt.coef_)
print('real variables\' index:\n', set(np.nonzero(dt.coef_)[0]))
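# Hedged sketch (an assumption added for illustration, not part of the original
# tutorial): the class probabilities described above can be computed from the
# linear predictors with a softmax over dt.x @ dt.coef_ (here coef_ has one column
# per class, i.e. the over-parameterised form of the model).
eta = dt.x @ dt.coef_                                  # n x M matrix of linear predictors
prob = np.exp(eta - eta.max(axis=1, keepdims=True))    # subtract the row max for numerical stability
prob /= prob.sum(axis=1, keepdims=True)                # each row now sums to one
print('first five rows of class probabilities:\n', prob[:5])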
###Output
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 1.09734231 4.03598978 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 9.91227834 -3.47987303 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 8.93282229 8.93249765 0. ]
[-4.03426165 -2.70336848 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-5.53475149 -2.65928982 0. ]
[ 0. 0. 0. ]]
real variables' index:
{2, 5, 10, 11, 18}
###Markdown
To carry out best subset selection for multinomial logistic regression, we can call the `abessMultinomial()`. Here is an example.
###Code
from abess.linear import abessMultinomial
s = 5
model = abessMultinomial(support_size = range(0, s + 1))
model.fit(dt.x, dt.y)
###Output
_____no_output_____
###Markdown
Its use is quite similar to `abessLogistic`. We can get the coefficients to recognize "in-model" variables.
###Code
print('intercept:\n', model.intercept_)
print('coefficients:\n', model.coef_)
###Output
intercept:
[21.42326269 20.715469 22.26781623]
coefficients:
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -3.48154954 5.76904948 -3.2394208 ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 23.04122134 -14.80633656 -7.28160058]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 13.76886614 11.64612255 -11.12983172]
[ -3.73875599 0.62171172 3.80279815]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -9.19066393 -2.17011988 11.44410734]
[ 0. 0. 0. ]]
###Markdown
So the variables used in the model can be recognized, and we can find that they are the same as the "real" coefficients used to generate the data.
###Code
print('used variables\' index:\n', set(np.nonzero(model.coef_)[0]))
###Output
used variables' index:
{2, 5, 10, 11, 18}
###Markdown
Logistic Regression and Multinomial ExtensionWe would like to use an example to show how the best subset selection for logistic regression works in our program. Real Data Example Titanic DatasetConsider the Titanic dataset obtained from the Kaggle competition: https://www.kaggle.com/c/titanic/data. The dataset consists of data about 889 passengers, and the goal of the competition is to predict the survival (yes/no) based on features including the class of service, the sex, the age, etc.
###Code
import numpy as np
import pandas as pd
dt = pd.read_csv("./train.csv")
print(dt.head(5))
###Output
PassengerId Survived Pclass \
0 1 0 3
1 2 1 1
2 3 1 3
3 4 1 1
4 5 0 3
Name Sex Age SibSp \
0 Braund, Mr. Owen Harris male 22.0 1
1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1
2 Heikkinen, Miss. Laina female 26.0 0
3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1
4 Allen, Mr. William Henry male 35.0 0
Parch Ticket Fare Cabin Embarked
0 0 A/5 21171 7.2500 NaN S
1 0 PC 17599 71.2833 C85 C
2 0 STON/O2. 3101282 7.9250 NaN S
3 0 113803 53.1000 C123 S
4 0 373450 8.0500 NaN S
###Markdown
We only focus on some numeric or classification variables:- predictor variables: $Pclass,\ Sex,\ Age,\ SibSp,\ Parch,\ Fare,\ Embarked$;- response variable is $Survived$.
###Code
dt = dt.iloc[:, [1,2,4,5,6,7,9,11]] # variables interested
dt['Pclass'] = dt['Pclass'].astype(str)
print(dt.head(5))
###Output
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 male 22.0 1 0 7.2500 S
1 1 1 female 38.0 1 0 71.2833 C
2 1 3 female 26.0 0 0 7.9250 S
3 1 1 female 35.0 1 0 53.1000 S
4 0 3 male 35.0 0 0 8.0500 S
###Markdown
However, some rows contain missing values (NaN) and we need to drop them.
###Code
dt = dt.dropna()
print('sample size: ', dt.shape)
###Output
sample size: (712, 8)
###Markdown
Then use dummy variables to replace classification variables:
###Code
dt1 = pd.get_dummies(dt)
print(dt1.head(5))
###Output
Survived Age SibSp Parch Fare Pclass_1 Pclass_2 Pclass_3 \
0 0 22.0 1 0 7.2500 0 0 1
1 1 38.0 1 0 71.2833 1 0 0
2 1 26.0 0 0 7.9250 0 0 1
3 1 35.0 1 0 53.1000 1 0 0
4 0 35.0 0 0 8.0500 0 0 1
Sex_female Sex_male Embarked_C Embarked_Q Embarked_S
0 0 1 0 0 1
1 1 0 1 0 0
2 1 0 0 0 1
3 1 0 0 0 1
4 0 1 0 0 1
###Markdown
Now we split `dt1` into training set and testing set:
###Code
from sklearn.model_selection import train_test_split
X = np.array(dt1.drop('Survived', axis = 1))
Y = np.array(dt1.Survived)
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size = 0.33, random_state = 0)
print('train size: ', train_x.shape[0])
print('test size:', test_x.shape[0])
###Output
train size: 477
test size: 235
###Markdown
Here `train_x` contains:- V0: Age- V1: number of siblings / spouses aboard the Titanic (SibSp)- V2: number of parents / children aboard the Titanic (Parch)- V3: passenger fare (Fare)- V4: dummy variable, 1st ticket class (1-yes, 0-no)- V5: dummy variable, 2nd ticket class (1-yes, 0-no)- V6: dummy variable, 3rd ticket class (1-yes, 0-no)- V7: dummy variable, female sex (1-yes, 0-no)- V8: dummy variable, male sex (1-yes, 0-no)- V9: dummy variable, embarked at Cherbourg (1-yes, 0-no)- V10: dummy variable, embarked at Queenstown (1-yes, 0-no)- V11: dummy variable, embarked at Southampton (1-yes, 0-no)And `train_y` indicates whether the passenger survived (1-yes, 0-no).
###Code
print('train_x:\n', train_x[0:5, :])
print('train_y:\n', train_y[0:5])
###Output
train_x:
[[54. 1. 0. 59.4 1. 0. 0. 1. 0.
1. 0. 0. ]
[30. 0. 0. 8.6625 0. 0. 1. 1. 0.
0. 0. 1. ]
[47. 0. 0. 38.5 1. 0. 0. 0. 1.
0. 0. 1. ]
[28. 2. 0. 7.925 0. 0. 1. 0. 1.
0. 0. 1. ]
[29. 1. 0. 26. 0. 1. 0. 1. 0.
0. 0. 1. ]]
train_y:
[1 0 0 0 1]
###Markdown
Model FittingThe `abessLogistic()` function in the `abess.linear` allows you to perform best subset selection in a highly efficient way. For example, in the Titanic sample, if you want to look for a best subset with no more than 5 variables on the logistic model, you can call:
###Code
from abess.linear import abessLogistic
s = 5 # max target sparsity
model = abessLogistic(support_size = range(0, s + 1))
model.fit(train_x, train_y)
###Output
_____no_output_____
###Markdown
Now the `model.coef_` contains the coefficients of the logistic model with no more than 5 variables. That is, those variables with a coefficient of 0 are unused in the model:
###Code
print(model.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
By default, the `abessLogistic` function sets `support_size = range(0, min(p, n/log(n)p))` and the best support size is determined by the Extended Bayesian Information Criterion (EBIC). You can change the tuning criterion by specifying the argument `ic_type`. The available tuning criteria are `gic`, `aic`, `bic` and `ebic`. For a quicker solution, you can change the tuning strategy to a golden-section path, which tries to find the elbow point of the tuning criterion over the hyperparameter space. Here we give an example.
###Code
model_gs = abessLogistic(path_type = "gs", s_min = 0, s_max = s)
model_gs.fit(train_x, train_y)
print(model_gs.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
where `s_min` and `s_max` bound the support size, and this model gives the same answer as before. More on the ResultsAfter fitting with `model.fit()`, we can explore the result further to interpret it. As shown above, `model.coef_` contains the sparse coefficients of the variables, and the non-zero values indicate the "important" variables chosen in the model.
###Code
print('Intercept: ', model.intercept_)
print('coefficients: \n', model.coef_)
print('Used variables\' index:', np.nonzero(model.coef_ != 0)[0])
###Output
Intercept: [0.57429775]
coefficients:
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
Used variables' index: [0 1 4 6 7]
###Markdown
The training loss and the score under information criterion:
###Code
print('Training Loss: ', model.train_loss_)
print('IC: ', model.ic_)
###Output
Training Loss: [204.35270048]
IC: [464.39204991]
###Markdown
The estimated model can be used for prediction. Just call the `model.predict()` function:
###Code
fitted_y = model.predict(test_x)
print(fitted_y)
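# A small hedged sketch (not in the original tutorial): a confusion matrix gives a
# quick breakdown of the predictions against the true survival labels.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, fitted_y.astype(int)))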
###Output
[0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0.
1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0. 1.
1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0.
1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1.
1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0.
0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 1. 0. 1.
1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0.
1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Besides, you can also obtain the survival probability of each observation with `model.predict_proba()`. Those with a probability greater than 0.5 are classified as "1" (survived).
###Code
fitted_p = model.predict_proba(test_x)
print(fitted_p)
###Output
[0.49256613 0.25942968 0.84928463 0.20204183 0.03801548 0.04022349
0.72351443 0.23115622 0.23115622 0.66834673 0.96775535 0.64905946
0.98461921 0.15238867 0.25004079 0.57640212 0.26995968 0.71264582
0.37791835 0.1771314 0.25773297 0.75392142 0.87974411 0.40251569
0.56441882 0.34057869 0.22005156 0.067159 0.57880531 0.33647767
0.15655122 0.02682661 0.14553043 0.69663788 0.89078445 0.87925152
0.91926004 0.59081387 0.42997279 0.45653474 0.38846964 0.09020182
0.05742461 0.07773719 0.0994852 0.11006334 0.9819574 0.14219863
0.1096089 0.96940171 0.71351188 0.69663788 0.63663757 0.25942968
0.54978583 0.53309793 0.07032472 0.0706292 0.86889888 0.37901167
0.43876674 0.03084541 0.14553043 0.19993615 0.29180956 0.11828599
0.94586145 0.30610513 0.98763221 0.80911714 0.25942968 0.93051703
0.9097025 0.51285362 0.04924417 0.53765354 0.48242039 0.26040948
0.09474175 0.3384564 0.55107315 0.88025271 0.09058398 0.81733446
0.86836852 0.09474175 0.04461544 0.28075505 0.78890012 0.13893026
0.02434171 0.04697945 0.70146853 0.91404969 0.66232291 0.0994852
0.93719603 0.8422183 0.1096089 0.15469685 0.15238867 0.85879022
0.22005156 0.24091195 0.21168044 0.15238867 0.60493878 0.32644935
0.26125213 0.07517093 0.13893026 0.74034636 0.84746075 0.45213182
0.0706292 0.25942968 0.22005156 0.01835698 0.14163263 0.20211369
0.15238867 0.09990237 0.23918546 0.73072611 0.26215016 0.03608545
0.03870124 0.16253688 0.74034636 0.97993672 0.08170611 0.64073592
0.84033393 0.85210036 0.80983396 0.97257783 0.63663757 0.01819022
0.04521358 0.11500215 0.35283318 0.0604244 0.80983396 0.65427173
0.56441882 0.21090587 0.09020182 0.15238867 0.09205769 0.13258298
0.07032472 0.10443874 0.67329436 0.91047691 0.87141113 0.13258298
0.13893026 0.69001575 0.9854175 0.74034636 0.95157309 0.09990237
0.97884484 0.51066947 0.04441775 0.04441775 0.28361352 0.03487023
0.49488971 0.1178021 0.64073592 0.62512052 0.97884484 0.0706292
0.50493039 0.62403068 0.86836852 0.13893026 0.17455761 0.3031159
0.07773719 0.37901167 0.11778441 0.4701259 0.40262288 0.9369219
0.17455761 0.16689812 0.66640667 0.87338811 0.24261599 0.58525135
0.76060241 0.09058398 0.958343 0.72981059 0.30511879 0.29180956
0.77425595 0.96775535 0.0858588 0.86836852 0.03084541 0.71900957
0.08726302 0.05295266 0.34866263 0.32853374 0.034404 0.15950977
0.91085503 0.52533827 0.80136124 0.55222273 0.07394554 0.24917023
0.76475846 0.73431446 0.27182894 0.8976234 0.67329436 0.04441775
0.30124969 0.97648392 0.16253688 0.14892722 0.02069282 0.28267012
0.05742461 0.05012194 0.12648308 0.06745077 0.08275843 0.09020182
0.067159 ]
###Markdown
We can also generate an ROC curve and calculate the AUC value. On this dataset, the AUC is 0.817, which is quite close to 1.
###Code
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, _ = roc_curve(test_y, fitted_p)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--')
plt.show()
print('AUC: ', auc(fpr, tpr))
###Output
_____no_output_____
###Markdown
Extension: Multi-class Classification Multinomial logistic regressionWhen the number of classes is more than 2, we call it a multi-class classification task. Logistic regression can be extended to model several classes of events, such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1, with a sum of one. The extended model is multinomial logistic regression.To arrive at the multinomial logistic model, one can imagine, for $K$ possible classes, running $K−1$ independent logistic regression models, in which one class is chosen as a "pivot" and then the other $K−1$ classes are separately regressed against the pivot outcome. This would proceed as follows, if class $K$ (the last outcome) is chosen as the pivot:$$\begin{aligned} \ln (\mathbb{P}(y=1)/\mathbb{P}(y=K)) = x^T\beta^{(1)},\\ \dots\ \dots\\ \ln (\mathbb{P}(y=K-1)/\mathbb{P}(y=K)) = x^T\beta^{(K-1)}.\end{aligned}$$Then, the probability of choosing the j-th class can be easily derived to be:$$ \mathbb{P}(y=j) = \frac{\exp(x^T\beta^{(j)})}{1+\sum_{k=1}^{K-1} \exp(x^T\beta^{(k)})},$$and subsequently, we would predict the $j^*$-th class where $j^*=\arg\max_j \mathbb{P}(y=j)$. Notice that, for the $K$ possible classes case, there are $p\times(K−1)$ unknown parameters: $\beta^{(1)},\dots,\beta^{(K−1)}$ to be estimated. Because the number of parameters increases with $K$, it is even more urgent to constrain the model complexity. The best subset selection for multinomial logistic regression aims to maximize the log-likelihood function and control the model complexity by restricting $B=(\beta^{(1)},\dots,\beta^{(K−1)})$ with $||B||_{0,2}\leq s$, where $||B||_{0,2}=\sum_{i=1}^p I(B_{i\cdot}\neq 0)$, $B_{i\cdot}$ is the $i$-th row of the coefficient matrix $B$ and $0\in R^{K-1}$ is an all-zero vector. In other words, each row of $B$ would be either all zero or all non-zero. Simulated Data ExampleWe shall conduct multinomial logistic regression on an artificial dataset for demonstration. The `make_multivariate_glm_data()` function provides a simple way to generate data suitable for this task. The assumption behind it is that the response vector follows a multinomial distribution. The artificial dataset contains 100 observations and 20 predictors, but only five predictors have influence on the three possible classes.
###Code
from abess.datasets import make_multivariate_glm_data
n = 100 # sample size
p = 20 # all predictors
k = 5 # real predictors
M = 3 # number of classes
np.random.seed(0)
dt = make_multivariate_glm_data(n = n, p = p, k = k, family = "multinomial", M = M)
print(dt.coef_)
print('real variables\' index:\n', set(np.nonzero(dt.coef_)[0]))
###Output
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 1.09734231 4.03598978 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 9.91227834 -3.47987303 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 8.93282229 8.93249765 0. ]
[-4.03426165 -2.70336848 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-5.53475149 -2.65928982 0. ]
[ 0. 0. 0. ]]
real variables' index:
{2, 5, 10, 11, 18}
###Markdown
To carry out best subset selection for multinomial logistic regression, we can call the `abessMultinomial()`. Here is an example.
###Code
from abess.linear import abessMultinomial
s = 5
model = abessMultinomial(support_size = range(0, s + 1))
model.fit(dt.x, dt.y)
###Output
_____no_output_____
###Markdown
Its use is quite similar to `abessLogistic`. We can get the coefficients to recognize "in-model" variables.
###Code
print('intercept:\n', model.intercept_)
print('coefficients:\n', model.coef_)
###Output
intercept:
[21.42326269 20.715469 22.26781623]
coefficients:
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -3.48154954 5.76904948 -3.2394208 ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 23.04122134 -14.80633656 -7.28160058]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 13.76886614 11.64612255 -11.12983172]
[ -3.73875599 0.62171172 3.80279815]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -9.19066393 -2.17011988 11.44410734]
[ 0. 0. 0. ]]
###Markdown
So the variables used in the model can be recognized, and we can find that they are the same as the "real" coefficients used to generate the data.
###Code
print('used variables\' index:\n', set(np.nonzero(model.coef_)[0]))
###Output
used variables' index:
{2, 5, 10, 11, 18}
###Markdown
Logistic Regression and Multinomial ExtensionWe would like to use an example to show how the best subset selection for logistic regression works in our program. Real Data Example Titanic DatasetConsider the Titanic dataset obtained from the Kaggle competition: https://www.kaggle.com/c/titanic/data. The dataset consists of data about 889 passengers, and the goal of the competition is to predict the survival (yes/no) based on features including the class of service, the sex, the age, etc.
###Code
import numpy as np
import pandas as pd
dt = pd.read_csv("./train.csv")
print(dt.head(5))
###Output
PassengerId Survived Pclass \
0 1 0 3
1 2 1 1
2 3 1 3
3 4 1 1
4 5 0 3
Name Sex Age SibSp \
0 Braund, Mr. Owen Harris male 22.0 1
1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1
2 Heikkinen, Miss. Laina female 26.0 0
3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1
4 Allen, Mr. William Henry male 35.0 0
Parch Ticket Fare Cabin Embarked
0 0 A/5 21171 7.2500 NaN S
1 0 PC 17599 71.2833 C85 C
2 0 STON/O2. 3101282 7.9250 NaN S
3 0 113803 53.1000 C123 S
4 0 373450 8.0500 NaN S
###Markdown
We only focus on some numeric or classification variables:- predictor variables: $Pclass,\ Sex,\ Age,\ SibSp,\ Parch,\ Fare,\ Embarked$;- response variable is $Survived$.
###Code
dt = dt.iloc[:, [1,2,4,5,6,7,9,11]] # variables interested
dt['Pclass'] = dt['Pclass'].astype(str)
print(dt.head(5))
###Output
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 male 22.0 1 0 7.2500 S
1 1 1 female 38.0 1 0 71.2833 C
2 1 3 female 26.0 0 0 7.9250 S
3 1 1 female 35.0 1 0 53.1000 S
4 0 3 male 35.0 0 0 8.0500 S
###Markdown
However, some rows contain missing values (NaN) and we need to drop them.
###Code
dt = dt.dropna()
print('sample size: ', dt.shape)
###Output
sample size: (712, 8)
###Markdown
Then use dummy variables to replace classification variables:
###Code
dt1 = pd.get_dummies(dt)
print(dt1.head(5))
###Output
Survived Age SibSp Parch Fare Pclass_1 Pclass_2 Pclass_3 \
0 0 22.0 1 0 7.2500 0 0 1
1 1 38.0 1 0 71.2833 1 0 0
2 1 26.0 0 0 7.9250 0 0 1
3 1 35.0 1 0 53.1000 1 0 0
4 0 35.0 0 0 8.0500 0 0 1
Sex_female Sex_male Embarked_C Embarked_Q Embarked_S
0 0 1 0 0 1
1 1 0 1 0 0
2 1 0 0 0 1
3 1 0 0 0 1
4 0 1 0 0 1
###Markdown
Now we split `dt1` into training set and testing set:
###Code
from sklearn.model_selection import train_test_split
X = np.array(dt1.drop('Survived', axis = 1))
Y = np.array(dt1.Survived)
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size = 0.33, random_state = 0)
print('train size: ', train_x.shape[0])
print('test size:', test_x.shape[0])
###Output
train size: 477
test size: 235
###Markdown
Here `train_x` contains:- V0: Age- V1: number of siblings / spouses aboard the Titanic (SibSp)- V2: number of parents / children aboard the Titanic (Parch)- V3: passenger fare (Fare)- V4: dummy variable, 1st ticket class (1-yes, 0-no)- V5: dummy variable, 2nd ticket class (1-yes, 0-no)- V6: dummy variable, 3rd ticket class (1-yes, 0-no)- V7: dummy variable, female sex (1-yes, 0-no)- V8: dummy variable, male sex (1-yes, 0-no)- V9: dummy variable, embarked at Cherbourg (1-yes, 0-no)- V10: dummy variable, embarked at Queenstown (1-yes, 0-no)- V11: dummy variable, embarked at Southampton (1-yes, 0-no)And `train_y` indicates whether the passenger survived (1-yes, 0-no).
###Code
print('train_x:\n', train_x[0:5, :])
print('train_y:\n', train_y[0:5])
###Output
train_x:
[[54. 1. 0. 59.4 1. 0. 0. 1. 0.
1. 0. 0. ]
[30. 0. 0. 8.6625 0. 0. 1. 1. 0.
0. 0. 1. ]
[47. 0. 0. 38.5 1. 0. 0. 0. 1.
0. 0. 1. ]
[28. 2. 0. 7.925 0. 0. 1. 0. 1.
0. 0. 1. ]
[29. 1. 0. 26. 0. 1. 0. 1. 0.
0. 0. 1. ]]
train_y:
[1 0 0 0 1]
###Markdown
Model FittingThe `abessLogistic()` function in the `abess.linear` allows you to perform best subset selection in a highly efficient way. For example, in the Titanic sample, if you want to look for a best subset with no more than 5 variables on the logistic model, you can call:
###Code
from abess.linear import abessLogistic
s = 5 # target sparsity
model = abessLogistic(support_size = range(0, s + 1))
model.fit(train_x, train_y)
###Output
_____no_output_____
###Markdown
Now the `model.coef_` contains the coefficients of the logistic model with no more than 5 variables. That is, those variables with a coefficient of 0 are unused in the model:
###Code
print(model.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
By default, the `abessLogistic` function sets `support_size = range(0, min(p, n/log(n)p))` and the best support size is determined by the Extended Bayesian Information Criterion (EBIC). You can change the tuning criterion by specifying the argument `ic_type`. The available tuning criteria are `gic`, `aic`, `bic` and `ebic`. For a quicker solution, you can change the tuning strategy to a golden-section path, which tries to find the elbow point of the tuning criterion over the hyperparameter space. Here we give an example.
###Code
model_gs = abessLogistic(path_type = "pgs", s_min = 0, s_max = s)
model_gs.fit(train_x, train_y)
print(model_gs.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
where `s_min` and `s_max` bound the support size, and this model gives the same answer as before. More on the ResultsAfter fitting with `model.fit()`, we can explore the result further to interpret it. As shown above, `model.coef_` contains the sparse coefficients of the variables, and the non-zero values indicate the "important" variables chosen in the model.
###Code
print('Intercept: ', model.intercept_)
print('coefficients: \n', model.coef_)
print('Used variables\' index:', np.nonzero(model.coef_ != 0)[0])
###Output
Intercept: [0.57429775]
coefficients:
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
Used variables' index: [0 1 4 6 7]
###Markdown
The training loss and the score under information criterion:
###Code
print('Training Loss: ', model.train_loss_)
print('IC: ', model.ic_)
###Output
Training Loss: [204.35270048]
IC: [464.39204991]
###Markdown
The estimated model can be used for prediction. Just call the `model.predict()` function:
###Code
fitted_y = model.predict(test_x)
print(fitted_y)
###Output
[0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0.
1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0. 1.
1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0.
1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1.
1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0.
0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 1. 0. 1.
1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0.
1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Besides, you can also obtain the survival probability of each observation with `model.predict_proba()`. Those with a probability greater than 0.5 are classified as "1" (survived).
###Code
fitted_p = model.predict_proba(test_x)
print(fitted_p)
###Output
[0.49256613 0.25942968 0.84928463 0.20204183 0.03801548 0.04022349
0.72351443 0.23115622 0.23115622 0.66834673 0.96775535 0.64905946
0.98461921 0.15238867 0.25004079 0.57640212 0.26995968 0.71264582
0.37791835 0.1771314 0.25773297 0.75392142 0.87974411 0.40251569
0.56441882 0.34057869 0.22005156 0.067159 0.57880531 0.33647767
0.15655122 0.02682661 0.14553043 0.69663788 0.89078445 0.87925152
0.91926004 0.59081387 0.42997279 0.45653474 0.38846964 0.09020182
0.05742461 0.07773719 0.0994852 0.11006334 0.9819574 0.14219863
0.1096089 0.96940171 0.71351188 0.69663788 0.63663757 0.25942968
0.54978583 0.53309793 0.07032472 0.0706292 0.86889888 0.37901167
0.43876674 0.03084541 0.14553043 0.19993615 0.29180956 0.11828599
0.94586145 0.30610513 0.98763221 0.80911714 0.25942968 0.93051703
0.9097025 0.51285362 0.04924417 0.53765354 0.48242039 0.26040948
0.09474175 0.3384564 0.55107315 0.88025271 0.09058398 0.81733446
0.86836852 0.09474175 0.04461544 0.28075505 0.78890012 0.13893026
0.02434171 0.04697945 0.70146853 0.91404969 0.66232291 0.0994852
0.93719603 0.8422183 0.1096089 0.15469685 0.15238867 0.85879022
0.22005156 0.24091195 0.21168044 0.15238867 0.60493878 0.32644935
0.26125213 0.07517093 0.13893026 0.74034636 0.84746075 0.45213182
0.0706292 0.25942968 0.22005156 0.01835698 0.14163263 0.20211369
0.15238867 0.09990237 0.23918546 0.73072611 0.26215016 0.03608545
0.03870124 0.16253688 0.74034636 0.97993672 0.08170611 0.64073592
0.84033393 0.85210036 0.80983396 0.97257783 0.63663757 0.01819022
0.04521358 0.11500215 0.35283318 0.0604244 0.80983396 0.65427173
0.56441882 0.21090587 0.09020182 0.15238867 0.09205769 0.13258298
0.07032472 0.10443874 0.67329436 0.91047691 0.87141113 0.13258298
0.13893026 0.69001575 0.9854175 0.74034636 0.95157309 0.09990237
0.97884484 0.51066947 0.04441775 0.04441775 0.28361352 0.03487023
0.49488971 0.1178021 0.64073592 0.62512052 0.97884484 0.0706292
0.50493039 0.62403068 0.86836852 0.13893026 0.17455761 0.3031159
0.07773719 0.37901167 0.11778441 0.4701259 0.40262288 0.9369219
0.17455761 0.16689812 0.66640667 0.87338811 0.24261599 0.58525135
0.76060241 0.09058398 0.958343 0.72981059 0.30511879 0.29180956
0.77425595 0.96775535 0.0858588 0.86836852 0.03084541 0.71900957
0.08726302 0.05295266 0.34866263 0.32853374 0.034404 0.15950977
0.91085503 0.52533827 0.80136124 0.55222273 0.07394554 0.24917023
0.76475846 0.73431446 0.27182894 0.8976234 0.67329436 0.04441775
0.30124969 0.97648392 0.16253688 0.14892722 0.02069282 0.28267012
0.05742461 0.05012194 0.12648308 0.06745077 0.08275843 0.09020182
0.067159 ]
###Markdown
We can also generate an ROC curve and calculate the AUC value. On this dataset, the AUC is 0.817, which is quite close to 1.
###Code
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, _ = roc_curve(test_y, fitted_p)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--')
plt.show()
print('AUC: ', auc(fpr, tpr))
###Output
_____no_output_____
###Markdown
Extension: Multi-class Classification Multinomial logistic regressionWhen the number of classes is more than 2, we call it a multi-class classification task. Logistic regression can be extended to model several classes of events, such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1, with a sum of one. The extended model is multinomial logistic regression.To arrive at the multinomial logistic model, one can imagine, for $K$ possible classes, running $K−1$ independent logistic regression models, in which one class is chosen as a "pivot" and then the other $K−1$ classes are separately regressed against the pivot outcome. This would proceed as follows, if class $K$ (the last outcome) is chosen as the pivot:$$\begin{aligned} \ln (\mathbb{P}(y=1)/\mathbb{P}(y=K)) = x^T\beta^{(1)},\\ \dots\ \dots\\ \ln (\mathbb{P}(y=K-1)/\mathbb{P}(y=K)) = x^T\beta^{(K-1)}.\end{aligned}$$Then, the probability of choosing the j-th class can be easily derived to be:$$ \mathbb{P}(y=j) = \frac{\exp(x^T\beta^{(j)})}{1+\sum_{k=1}^{K-1} \exp(x^T\beta^{(k)})},$$and subsequently, we would predict the $j^*$-th class where $j^*=\arg\max_j \mathbb{P}(y=j)$. Notice that, for the $K$ possible classes case, there are $p\times(K−1)$ unknown parameters: $\beta^{(1)},\dots,\beta^{(K−1)}$ to be estimated. Because the number of parameters increases with $K$, it is even more urgent to constrain the model complexity. The best subset selection for multinomial logistic regression aims to maximize the log-likelihood function and control the model complexity by restricting $B=(\beta^{(1)},\dots,\beta^{(K−1)})$ with $||B||_{0,2}\leq s$, where $||B||_{0,2}=\sum_{i=1}^p I(B_{i\cdot}\neq 0)$, $B_{i\cdot}$ is the $i$-th row of the coefficient matrix $B$ and $0\in R^{K-1}$ is an all-zero vector. In other words, each row of $B$ would be either all zero or all non-zero. Simulated Data ExampleWe shall conduct multinomial logistic regression on an artificial dataset for demonstration. The `make_multivariate_glm_data()` function provides a simple way to generate data suitable for this task. The assumption behind it is that the response vector follows a multinomial distribution. The artificial dataset contains 100 observations and 20 predictors, but only five predictors have influence on the three possible classes.
###Code
from abess.datasets import make_multivariate_glm_data
n = 100 # sample size
p = 20 # all predictors
k = 5 # real predictors
M = 3 # number of classes
np.random.seed(0)
dt = make_multivariate_glm_data(n = n, p = p, k = k, family = "multinomial", M = M)
print(dt.coef_)
print('real variables\' index:\n', set(np.nonzero(dt.coef_)[0]))
###Output
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 1.09734231 4.03598978 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 9.91227834 -3.47987303 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 8.93282229 8.93249765 0. ]
[-4.03426165 -2.70336848 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-5.53475149 -2.65928982 0. ]
[ 0. 0. 0. ]]
real variables' index:
{2, 5, 10, 11, 18}
###Markdown
To carry out best subset selection for multinomial logistic regression, we can call the `abessMultinomial()`. Here is an example.
###Code
from abess.linear import abessMultinomial
s = 5
model = abessMultinomial(support_size = range(0, s + 1))
model.fit(dt.x, dt.y)
###Output
_____no_output_____
###Markdown
Its use is quite similar to `abessLogistic`. We can get the coefficients to recognize "in-model" variables.
###Code
print('intercept:\n', model.intercept_)
print('coefficients:\n', model.coef_)
###Output
intercept:
[21.42326269 20.715469 22.26781623]
coefficients:
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -3.48154954 5.76904948 -3.2394208 ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 23.04122134 -14.80633656 -7.28160058]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 13.76886614 11.64612255 -11.12983172]
[ -3.73875599 0.62171172 3.80279815]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -9.19066393 -2.17011988 11.44410734]
[ 0. 0. 0. ]]
###Markdown
So the variables used in the model can be recognized, and we can find that they are the same as the "real" coefficients used to generate the data.
###Code
print('used variables\' index:\n', set(np.nonzero(model.coef_)[0]))
###Output
used variables' index:
{2, 5, 10, 11, 18}
###Markdown
Logistic Regression and Multinomial ExtensionWe would like to use an example to show how the best subset selection for logistic regression works in our program. Real Data Example Titanic DatasetConsider the Titanic dataset obtained from the Kaggle competition: https://www.kaggle.com/c/titanic/data. The dataset consists of data about 889 passengers, and the goal of the competition is to predict the survival (yes/no) based on features including the class of service, the sex, the age, etc.
###Code
import numpy as np
import pandas as pd
dt = pd.read_csv("./train.csv")
print(dt.head(5))
###Output
PassengerId Survived Pclass \
0 1 0 3
1 2 1 1
2 3 1 3
3 4 1 1
4 5 0 3
Name Sex Age SibSp \
0 Braund, Mr. Owen Harris male 22.0 1
1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1
2 Heikkinen, Miss. Laina female 26.0 0
3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1
4 Allen, Mr. William Henry male 35.0 0
Parch Ticket Fare Cabin Embarked
0 0 A/5 21171 7.2500 NaN S
1 0 PC 17599 71.2833 C85 C
2 0 STON/O2. 3101282 7.9250 NaN S
3 0 113803 53.1000 C123 S
4 0 373450 8.0500 NaN S
###Markdown
We only focus on some numeric or classification variables:- predictor variables: $Pclass,\ Sex,\ Age,\ SibSp,\ Parch,\ Fare,\ Embarked$;- response variable is $Survived$.
###Code
dt = dt.iloc[:, [1,2,4,5,6,7,9,11]] # variables interested
dt['Pclass'] = dt['Pclass'].astype(str)
print(dt.head(5))
###Output
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 male 22.0 1 0 7.2500 S
1 1 1 female 38.0 1 0 71.2833 C
2 1 3 female 26.0 0 0 7.9250 S
3 1 1 female 35.0 1 0 53.1000 S
4 0 3 male 35.0 0 0 8.0500 S
###Markdown
However, some rows contain missing values (NaN) and we need to drop them.
###Code
dt = dt.dropna()
print('sample size: ', dt.shape)
###Output
sample size: (712, 8)
###Markdown
Then use dummy variables to replace classification variables:
###Code
dt1 = pd.get_dummies(dt)
print(dt1.head(5))
###Output
Survived Age SibSp Parch Fare Pclass_1 Pclass_2 Pclass_3 \
0 0 22.0 1 0 7.2500 0 0 1
1 1 38.0 1 0 71.2833 1 0 0
2 1 26.0 0 0 7.9250 0 0 1
3 1 35.0 1 0 53.1000 1 0 0
4 0 35.0 0 0 8.0500 0 0 1
Sex_female Sex_male Embarked_C Embarked_Q Embarked_S
0 0 1 0 0 1
1 1 0 1 0 0
2 1 0 0 0 1
3 1 0 0 0 1
4 0 1 0 0 1
###Markdown
Now we split `dt1` into training set and testing set:
###Code
from sklearn.model_selection import train_test_split
X = np.array(dt1.drop('Survived', axis = 1))
Y = np.array(dt1.Survived)
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size = 0.33, random_state = 0)
print('train size: ', train_x.shape[0])
print('test size:', test_x.shape[0])
###Output
train size: 477
test size: 235
###Markdown
Here `train_x` contains:- V0: Age- V1: number of siblings / spouses aboard the Titanic (SibSp)- V2: number of parents / children aboard the Titanic (Parch)- V3: passenger fare (Fare)- V4: dummy variable, 1st ticket class (1-yes, 0-no)- V5: dummy variable, 2nd ticket class (1-yes, 0-no)- V6: dummy variable, 3rd ticket class (1-yes, 0-no)- V7: dummy variable, female sex (1-yes, 0-no)- V8: dummy variable, male sex (1-yes, 0-no)- V9: dummy variable, embarked at Cherbourg (1-yes, 0-no)- V10: dummy variable, embarked at Queenstown (1-yes, 0-no)- V11: dummy variable, embarked at Southampton (1-yes, 0-no)And `train_y` indicates whether the passenger survived (1-yes, 0-no).
###Code
print('train_x:\n', train_x[0:5, :])
print('train_y:\n', train_y[0:5])
###Output
train_x:
[[54. 1. 0. 59.4 1. 0. 0. 1. 0.
1. 0. 0. ]
[30. 0. 0. 8.6625 0. 0. 1. 1. 0.
0. 0. 1. ]
[47. 0. 0. 38.5 1. 0. 0. 0. 1.
0. 0. 1. ]
[28. 2. 0. 7.925 0. 0. 1. 0. 1.
0. 0. 1. ]
[29. 1. 0. 26. 0. 1. 0. 1. 0.
0. 0. 1. ]]
train_y:
[1 0 0 0 1]
###Markdown
Model Fitting
The `LogisticRegression()` function in `abess.linear` allows you to perform best subset selection in a highly efficient way. For example, in the Titanic sample, if you want to look for a best subset with no more than 5 variables on the logistic model, you can call:
###Code
from abess.linear import LogisticRegression
s = 5 # max target sparsity
model = LogisticRegression(support_size = range(0, s + 1))
model.fit(train_x, train_y)
###Output
_____no_output_____
###Markdown
Now `model.coef_` contains the coefficients of the logistic model with no more than 5 variables. That is, variables with a coefficient of 0 are unused in the model:
###Code
print(model.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
By default, the `LogisticRegression` function sets `support_size = range(0, min(p, n/log(n)p))` and the best support size is determined by the Extended Bayesian Information Criterion (EBIC). You can change the tuning criterion by specifying the argument `ic_type`. The available tuning criteria are `gic`, `aic`, `bic` and `ebic`. For a quicker solution, you can change the tuning strategy to a golden-section path, which tries to find the elbow point of the tuning criterion over the hyperparameter space. Here we give an example.
###Code
model_gs = LogisticRegression(path_type = "gs", s_min = 0, s_max = s)
model_gs.fit(train_x, train_y)
print(model_gs.coef_)
###Output
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
###Markdown
where `s_min` and `s_max` bound the support size, and this model gives the same answer as before.
More on the Results
After fitting with `model.fit()`, we can do some further exploration to interpret the fitted model. As shown above, `model.coef_` contains the sparse coefficients of the variables, and the non-zero values indicate the "important" variables chosen in the model.
###Code
print('Intercept: ', model.intercept_)
print('coefficients: \n', model.coef_)
print('Used variables\' index:', np.nonzero(model.coef_ != 0)[0])
###Output
Intercept: [0.57429775]
coefficients:
[-0.05410776 -0.53642966 0. 0. 1.74091231 0.
-1.26223831 2.7096497 0. 0. 0. 0. ]
Used variables' index: [0 1 4 6 7]
###Markdown
The training loss and the score under the information criterion:
###Code
print('Training Loss: ', model.train_loss_)
print('IC: ', model.ic_)
###Output
Training Loss: [204.35270048]
IC: [464.39204991]
###Markdown
Prediction is supported for the estimated model. Just call the `model.predict()` function:
###Code
fitted_y = model.predict(test_x)
print(fitted_y)
###Output
[0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0.
1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0. 1.
1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0.
1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1.
1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0.
0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 1. 0. 1.
1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0.
1. 1. 0. 1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Besides, you can also obtain the survival probability of each observation with `model.predict_proba()`. Observations with a probability greater than 0.5 are classified as "1" (survived).
###Code
fitted_p = model.predict_proba(test_x)
print(fitted_p)
###Output
[0.49256613 0.25942968 0.84928463 0.20204183 0.03801548 0.04022349
0.72351443 0.23115622 0.23115622 0.66834673 0.96775535 0.64905946
0.98461921 0.15238867 0.25004079 0.57640212 0.26995968 0.71264582
0.37791835 0.1771314 0.25773297 0.75392142 0.87974411 0.40251569
0.56441882 0.34057869 0.22005156 0.067159 0.57880531 0.33647767
0.15655122 0.02682661 0.14553043 0.69663788 0.89078445 0.87925152
0.91926004 0.59081387 0.42997279 0.45653474 0.38846964 0.09020182
0.05742461 0.07773719 0.0994852 0.11006334 0.9819574 0.14219863
0.1096089 0.96940171 0.71351188 0.69663788 0.63663757 0.25942968
0.54978583 0.53309793 0.07032472 0.0706292 0.86889888 0.37901167
0.43876674 0.03084541 0.14553043 0.19993615 0.29180956 0.11828599
0.94586145 0.30610513 0.98763221 0.80911714 0.25942968 0.93051703
0.9097025 0.51285362 0.04924417 0.53765354 0.48242039 0.26040948
0.09474175 0.3384564 0.55107315 0.88025271 0.09058398 0.81733446
0.86836852 0.09474175 0.04461544 0.28075505 0.78890012 0.13893026
0.02434171 0.04697945 0.70146853 0.91404969 0.66232291 0.0994852
0.93719603 0.8422183 0.1096089 0.15469685 0.15238867 0.85879022
0.22005156 0.24091195 0.21168044 0.15238867 0.60493878 0.32644935
0.26125213 0.07517093 0.13893026 0.74034636 0.84746075 0.45213182
0.0706292 0.25942968 0.22005156 0.01835698 0.14163263 0.20211369
0.15238867 0.09990237 0.23918546 0.73072611 0.26215016 0.03608545
0.03870124 0.16253688 0.74034636 0.97993672 0.08170611 0.64073592
0.84033393 0.85210036 0.80983396 0.97257783 0.63663757 0.01819022
0.04521358 0.11500215 0.35283318 0.0604244 0.80983396 0.65427173
0.56441882 0.21090587 0.09020182 0.15238867 0.09205769 0.13258298
0.07032472 0.10443874 0.67329436 0.91047691 0.87141113 0.13258298
0.13893026 0.69001575 0.9854175 0.74034636 0.95157309 0.09990237
0.97884484 0.51066947 0.04441775 0.04441775 0.28361352 0.03487023
0.49488971 0.1178021 0.64073592 0.62512052 0.97884484 0.0706292
0.50493039 0.62403068 0.86836852 0.13893026 0.17455761 0.3031159
0.07773719 0.37901167 0.11778441 0.4701259 0.40262288 0.9369219
0.17455761 0.16689812 0.66640667 0.87338811 0.24261599 0.58525135
0.76060241 0.09058398 0.958343 0.72981059 0.30511879 0.29180956
0.77425595 0.96775535 0.0858588 0.86836852 0.03084541 0.71900957
0.08726302 0.05295266 0.34866263 0.32853374 0.034404 0.15950977
0.91085503 0.52533827 0.80136124 0.55222273 0.07394554 0.24917023
0.76475846 0.73431446 0.27182894 0.8976234 0.67329436 0.04441775
0.30124969 0.97648392 0.16253688 0.14892722 0.02069282 0.28267012
0.05742461 0.05012194 0.12648308 0.06745077 0.08275843 0.09020182
0.067159 ]
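###Markdown
As a quick sanity check (added here for illustration; it is not part of the original notebook), we can compare the probabilities thresholded at 0.5 against the labels returned by `model.predict()` above:
###Code
import numpy as np
# hypothetical check: labels obtained by thresholding the predicted probabilities at 0.5
thresholded = (fitted_p > 0.5).astype(float)
print('fraction matching model.predict():', np.mean(thresholded == fitted_y))
###Output
_____no_output_____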
###Markdown
We can also generate an ROC curve and calculate the AUC value. On this dataset, the AUC is 0.817, which is quite close to 1.
###Code
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, _ = roc_curve(test_y, fitted_p)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--')
plt.show()
print('AUC: ', auc(fpr, tpr))
###Output
_____no_output_____
###Markdown
Extension: Multi-class Classification
Multinomial logistic regression
When the number of classes is more than 2, we call it a multi-class classification task. Logistic regression can be extended to model several classes of events, such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1, with the probabilities summing to one. The extended model is multinomial logistic regression.
To arrive at the multinomial logistic model, one can imagine, for $K$ possible classes, running $K-1$ independent logistic regression models, in which one class is chosen as a "pivot" and then the other $K-1$ classes are separately regressed against the pivot outcome. This would proceed as follows, if class $K$ (the last outcome) is chosen as the pivot:
$$\begin{aligned} \ln (\mathbb{P}(y=1)/\mathbb{P}(y=K)) = x^T\beta^{(1)},\\ \dots\ \dots\\ \ln (\mathbb{P}(y=K-1)/\mathbb{P}(y=K)) = x^T\beta^{(K-1)}.\end{aligned}$$
Then, the probability of choosing the $j$-th class can be easily derived to be:
$$ \mathbb{P}(y=j) = \frac{\exp(x^T\beta^{(j)})}{1+\sum_{k=1}^{K-1} \exp(x^T\beta^{(k)})},$$
and subsequently, we would predict the $j^*$-th class where $j^*=\arg\max_j \mathbb{P}(y=j)$. Notice that, for the $K$-class case, there are $p\times(K-1)$ unknown parameters $\beta^{(1)},\dots,\beta^{(K-1)}$ to be estimated. Because the number of parameters increases with $K$, it is even more urgent to constrain the model complexity. Best subset selection for multinomial logistic regression therefore aims to maximize the log-likelihood function while controlling the model complexity by restricting $B=(\beta^{(1)},\dots,\beta^{(K-1)})$ with $||B||_{0,2}\leq s$, where $||B||_{0,2}=\sum_{i=1}^p I(B_{i\cdot}\neq 0)$, $B_{i\cdot}$ is the $i$-th row of the coefficient matrix $B$ and $0\in R^{K-1}$ is an all-zero vector. In other words, each row of $B$ is either all zero or all non-zero.
Simulated Data Example
We shall conduct multinomial logistic regression on an artificial dataset for demonstration. The `make_multivariate_glm_data()` function provides a simple way to generate suitable data for this task. The assumption behind it is that the response vector follows a multinomial distribution. The artificial dataset contains 100 observations and 20 predictors, but only five predictors have influence on the three possible classes.
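Before generating the data, here is a tiny standalone sketch (toy numbers chosen purely for illustration, not taken from this dataset) of how the class probabilities follow from the formula above: the scores $x^T\beta^{(j)}$ for the $K-1$ non-pivot classes, together with $0$ for the pivot class, are passed through a softmax.
###Code
import numpy as np

# assumed toy values: p = 4 features, K = 3 classes, class K used as the pivot
x = np.array([1.0, -0.5, 2.0, 0.3])
B = np.array([[ 0.2, -0.1],   # columns are beta^(1) and beta^(2)
              [ 1.0,  0.4],
              [-0.3,  0.8],
              [ 0.0,  0.0]])
logits = np.append(x @ B, 0.0)             # x^T beta^(k) for k = 1..K-1, plus 0 for the pivot class
probs = np.exp(logits) / np.exp(logits).sum()
print(probs, probs.sum())                  # K class probabilities, summing to 1
###Output
_____no_output_____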
###Code
from abess.datasets import make_multivariate_glm_data
n = 100 # sample size
p = 20 # all predictors
k = 5 # real predictors
M = 3 # number of classes
np.random.seed(0)
dt = make_multivariate_glm_data(n = n, p = p, k = k, family = "multinomial", M = M)
print(dt.coef_)
print('real variables\' index:\n', set(np.nonzero(dt.coef_)[0]))
###Output
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 1.09734231 4.03598978 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 9.91227834 -3.47987303 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 8.93282229 8.93249765 0. ]
[-4.03426165 -2.70336848 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[-5.53475149 -2.65928982 0. ]
[ 0. 0. 0. ]]
real variables' index:
{2, 5, 10, 11, 18}
###Markdown
To carry out best subset selection for multinomial logistic regression, we can call the `MultinomialRegression()`. Here is an example.
###Code
from abess.linear import MultinomialRegression
s = 5
model = MultinomialRegression(support_size = range(0, s + 1))
model.fit(dt.x, dt.y)
###Output
_____no_output_____
###Markdown
Its use is quite similar to `LogisticRegression`. We can get the coefficients to recognize "in-model" variables.
###Code
print('intercept:\n', model.intercept_)
print('coefficients:\n', model.coef_)
###Output
intercept:
[21.42326269 20.715469 22.26781623]
coefficients:
[[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -3.48154954 5.76904948 -3.2394208 ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 23.04122134 -14.80633656 -7.28160058]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 13.76886614 11.64612255 -11.12983172]
[ -3.73875599 0.62171172 3.80279815]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ -9.19066393 -2.17011988 11.44410734]
[ 0. 0. 0. ]]
###Markdown
So the variables used in the model can be recognized, and we can find that they are the same as the data's "real" coefficients we generated.
###Code
print('used variables\' index:\n', set(np.nonzero(model.coef_)[0]))
###Output
used variables' index:
{2, 5, 10, 11, 18}
|
Sequences, Time Series and Prediction/Week 1/Create and Predict Synthetic Data.ipynb | ###Markdown
Now that we have the time series, let's split it so we can start forecasting
###Code
split_time = 1100
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
plt.figure(figsize=(10, 6))
plot_series(time_train, x_train)
plt.show()
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plt.show()
###Output
_____no_output_____
###Markdown
Naive Forecast
###Code
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, naive_forecast)
###Output
_____no_output_____
###Markdown
Let's zoom in on the start of the validation period:
###Code
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150)
plot_series(time_valid, naive_forecast, start=1, end=151)
###Output
_____no_output_____
###Markdown
You can see that the naive forecast lags 1 step behind the time series. Now let's compute the mean squared error and the mean absolute error between the forecasts and the actual values in the validation period:
###Code
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
###Output
19.578304
2.6011972
###Markdown
That's our baseline, now let's try a moving average:
###Code
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, moving_avg)
print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())
print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())
###Output
65.786224
4.3040023
###Markdown
That's worse than the naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*.
###Code
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series)
plt.show()
###Output
_____no_output_____
###Markdown
Great, the trend and seasonality seem to be gone, so now we can use the moving average:
###Code
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:])
plot_series(time_valid, diff_moving_avg)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's bring back the trend and seasonality by adding the past values from t – 365:
###Code
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_past)
plt.show()
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())
###Output
8.498155
2.327179
###Markdown
Better than the naive forecast, good. However, the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving average on past values to remove some of the noise:
###Code
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_smooth_past)
plt.show()
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
###Output
12.527956
2.2034435
|
.ipynb_checkpoints/MC_Classifier_NN-checkpoint 21.ipynb | ###Markdown
Sequential NN
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from config.config import *
from config.constants import *
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
from sklearn.metrics import accuracy_score
from collections import Counter
from sklearn.model_selection import train_test_split
def plot_model(hist):
fig, axs = plt.subplots(nrows=1, figsize=(11, 9))
plt.rcParams['font.size'] = '14'
for label in (axs.get_xticklabels() + axs.get_yticklabels()):
label.set_fontsize(14)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
axs.set_title('Model Accuracy')
axs.set_ylabel('Accuracy', fontsize=14)
axs.set_xlabel('Epoch', fontsize=14)
plt.legend(['train', 'val'], loc='upper left')
plt.show()
print("Model has training accuracy of {:.2f}%".format(hist.history['accuracy'][-1]*100))
def pre_process_split(path):
dataset = pd.read_csv(path)
dataset.dropna(inplace = True)
# assigning new column names to the dataframe
# dataset.columns = constants.cols + ['label']
# creating training set ignoring labels
train_data = dataset[dataset.columns[:-1]].values
labels = dataset['label'].values
n_class = len(set(labels))
X_train, X_test, y_train, y_test = train_test_split(train_data, labels, test_size=0.20)
X_train = X_train.reshape(-1, 1, train_data.shape[1])
X_test = X_test.reshape(-1, 1, train_data.shape[1])
y_train = y_train.reshape(-1, 1, 1)
y_test = y_test.reshape(-1, 1, 1)
return X_train, X_test, y_train, y_test, n_class
def model_config_train(name,eps,bs,actvn,datalink):
print("processing dataset")
X_train, X_test, y_train, y_test, n_class = pre_process_split(datalink)
print(n_class)
model = Sequential()
model.add(LSTM(100, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(n_class, activation=actvn))
print(model.summary())
chk = ModelCheckpoint(name+'.pkl',save_best_only=True, mode='auto', verbose=1)
print("saving as:",name+'.pkl')
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(X_train, y_train, epochs=eps, batch_size=bs, callbacks=[chk], validation_split=0.2)
plot_model(hist)
return model
###Output
_____no_output_____
###Markdown
Loading dataset for binary classifier
###Code
def plotter(plot_data,unique_labels,n_plots):
data = plot_data.copy()
predicted_labels = data['label']
#print(len(set(predicted_labels)),unique_labels)
#print(Counter(predicted_labels).values(),[unique_labels[each] for each in Counter(predicted_labels).keys()])
matrics = sorted(zip([unique_labels[each] for each in Counter(predicted_labels).keys()],Counter(predicted_labels).values() ), key=lambda x: x[1])
score = [list(j) for j in matrics][::-1]
total = sum([i[1] for i in score])
c=0
for i in score:
score[c][1] = str(round(i[1]*100/total,2))+"%"
#print("Fault type:", i[-1], "Percentage: {:.2f}%".format(i[1]*100/total))
c+=1
print(pd.DataFrame.from_records(score,columns=['Fault type','Percentage']))
#print("changing numbers to labels again")
data['label'] = [unique_labels[i] for i in predicted_labels]
fig, ax = plt.subplots(n_plots,figsize=(15,4*n_plots))
for j in range(n_plots):
legend_list = []
for i in range(len(set(predicted_labels))):
extract = data[data.label==unique_labels[i]][cols[j]]
#print(len(extract))
if unique_labels[i]==score[0][0] and score[0][0]!='NML' or unique_labels[i]== 'FAULT':
temp = ax[j].scatter(extract.index,extract,marker='+',s=40)
else:
temp = ax[j].scatter(extract.index,extract,marker='.',s=10)
legend_list.append(temp)
ax[j].legend(legend_list,unique_labels,scatterpoints=3,ncol=1,fontsize=15)
fig.tight_layout()
plt.show()
return score[0][0]
def tester(model,frame):
data = frame
cols = ['A'+str(each+1) for each in range(int(col_len/2))] + ['V'+str(each+1) for each in range(int(col_len/2))]
if data.shape[1]==6:
data.columns = cols
elif data.shape[1]==7:
data.columns = cols + ['label']
data = data[cols]
else:
print("columns length is ",data.shape[1])
test_preds = model.predict(data.values.reshape(-1,1,6).tolist())
predicted_labels = np.argmax(test_preds,axis=1)
data['label'] = predicted_labels
return data
###Output
_____no_output_____
###Markdown
Testing the models
###Code
model_config_train('binary_clf',20,2000,'softmax','./KMTrainingSet/binary/bin_dataset_simulink.csv')
model_config_train('multi_clf',20,2000,'softmax','./KMTrainingSet/multi/mul_dataset_simulink.csv')
binary_labels_list = ['NML','FAULT']
binary_model = load_model('binary_clf.pkl')
multi_labels_list = ['AB', 'AC', 'BC', 'ABC', 'AG', 'BG', 'ABG', 'CG', 'ACG', 'BCG', 'ABCG']
multi_model = load_model('multi_clf.pkl')
import os
# current directory
path = "./TrainingSet/"
# list of files in the given path is assigned to the variable
file_list = [each for each in list(os.walk(path))[0][-1] if ".csv" in each]
checker = []
for each in file_list:
print("\n.\n.\n",each)
temp = tester(binary_model,pd.read_csv('./TrainingSet/'+each))
plotter(temp,binary_labels_list,2)
temp = tester(multi_model,temp[temp.label!=0])
high = plotter(temp,multi_labels_list,2)
if high == ''.join([i for i in each.split(".")[0] if not i.isdigit()]):
checker.append(high)
else:
checker.append('incorrect')
files_failing_model = [file_list[i] for i in range(len(checker)) if checker[i]=='incorrect']
names = [''.join([i for i in each.split(".")[0] if not i.isdigit()]) for each in files_failing_model]
Counter(names)
temp = tester(binary_model,pd.read_csv('./TrainingSet/1AB.csv'))
plotter(temp,binary_labels_list,2)
temp = tester(multi_model,temp[temp.label!=0])
plotter(temp,multi_labels_list,2)
data = pd.read_csv('./TrainingSet/1AG.csv')
round(data['3V'])
dat = Counter((round(data['3V'])/10))
matrics = sorted(zip([each for each in Counter(dat).keys()],Counter(dat).values() ), key=lambda x: x[0])
matrics
import matplotlib.pyplot as plt
from kneed import KneeLocator
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler
import pandas as pd
data = pd.read_csv('KMTrainingset/2ABG.csv')
features = data[data.columns[:-1]].values.tolist()
#true_labels = data['label'].values.tolist()
scaler = StandardScaler()
scaled_features = scaler.fit_transform(features)
kmeans = KMeans(
init="random",
n_clusters=2,
n_init=10,
max_iter=500,
random_state=42
)
kmeans.fit(scaled_features)
kmeans.cluster_centers_
labels = kmeans.fit_predict(features)
#data['label']=labels
data.head()
dic = Counter(labels)
dic
if dic[1]>dic[0]:
print("1 = 0 , 0 =1")
data['label']=[1 if i == 0 else 0 for i in labels]
else:
print(True)
dic = Counter(data['label'])
data
n_plots = 6
fig, ax = plt.subplots(n_plots,figsize=(15,4*n_plots))
unique_labels = ['NML','Fault']
cols = data.columns[:-1]
for j in range(6):
legend_list = []
for i in list(set(data.label)):
plo = data[data.label == i]
temp = ax[j].scatter(plo.index,plo[cols[j]],marker='+',s=40)
legend_list.append(temp)
ax[j].legend(legend_list,unique_labels,scatterpoints=3,ncol=1,fontsize=15)
fig.tight_layout()
plt.show()
org = [0,1,0,1,1,1,0,0,1,0,1]
[1 if i == 0 else 0 for i in org]
for x,y in zip(org,[1 if i == 0 else 0 for i in org]):
print(x+y)
###Output
_____no_output_____ |
ipynb to run experiments/MC Off Policy on Simple Day.ipynb | ###Markdown
Python Notebook to interact with gym-battery and battery-agent
This Python notebook is a working document to interact with and test the environment and the agent.
Note: In order for this to work, gym-battery needs to be installed as a package, using pip install -e gym-battery from wherever gym-battery exists. The IPython notebook should exist in battery dispatch by default and should be able to access those resources, so it does not necessarily need to be built/installed using pip.
###Code
import gym
import gym_battery
import numpy as np
import pandas as pd
env = gym.make('gym_battery:battery-v0', **{'N_actions':5})
env.set_standard_system()
import pickle
simple_load = pd.read_pickle("simple_load_1d.pkl")
#simple_load = pd.read_clipboard()
simple_load.value.plot()
env.load.initialize(simple_load)
env.fit_load_to_space()
env.terminal_state
# Show the possible action mapping the agent can take
env.action_mapping
print(env.observation_space.low)
print("to")
print(env.observation_space.high)
# Set how to structure the environment. 'count_days' will generate a single day as an episode. The number of days
# given indicates how many different days to use.
# This needs to be changed so that it generates LONGER episodes, not DIFFERENT episodes, but this hasn't been done yet.
env.episode_type = 'count_days'
env.run_N_episodes = 1
# Get the do-nothing value for taking no action
def dict_key_by_val(d, val):
for k in d.keys():
if d[k] == val:
return k
raise ValueError("value not found in dictionary")
act0 = dict_key_by_val(env.action_mapping, 0)
act0
''' Set up the agent and the discretizer.'''
from batterydispatch.agent.agents import MonteCarloAgent
from batterydispatch.agent.discretizers import Box_Discretizer
from batterydispatch.agent.policies import do_nothing
agent = MonteCarloAgent()
agent.set_policy(do_nothing, {'do_nothing_action': act0})
# Note, you can change the size of the state space by changing the number of buckets, below
agent.set_discretizer(Box_Discretizer(env.observation_space, N=[6, 4, 12, 12]))
agent.actions = env.action_space
agent.learning_rate = 0.05 # used for the updates of the Q estimates
agent.subtype = 'off-policy' # Setup the MC agent for off-policy learning
global eps
eps=0
agent.S_A_values
agent.discretizer.buckets
###Output
_____no_output_____
###Markdown
Plot the day of data that we will be trying to learn from
###Code
done = False
state = env.reset()
i = 0
while not done:
i+=1
_,reward,done, details = env.step(act0)
from matplotlib import pyplot as plt
plt.plot(env.grid_flow.net_flow)
try:
print(list(env.grid_flow.start_date)[0])
except:
pass
print(i)
print(reward)
default_reward = reward
plt.show()
# We then initialize the agent state-action estimates, based on the original billing period.
# We also give the do_nothing action a small bonus of 100, in order to prevent the agent from arbitrarily taking action.
agent.initialize_state_actions(new_default=default_reward,
do_nothing_action = act0,
do_nothing_bonus = 100)
agent.policy_args
###Output
_____no_output_____
###Markdown
Set up the function to run the episodes, and run episodes until convergence.
###Code
from batterydispatch.agent.functions import log_history, run_episode
# We then set the final parameters guiding the episodes: the agent's proclivity for random actions,
# and the number of episodes without a policy change before we can say we've converged.
agent.set_greedy_policy(eta=0.125)
agent.patience = 10000
agent.name
agent.learning_rate = 0.075
agent.initialize_state_actions(new_default=default_reward,
do_nothing_action = act0,
do_nothing_bonus = 100)
from IPython.display import clear_output
for iteration in [1]:
notes = 'Rerun: Run of a Monte Carlo Off Policy agent on Simple Day with seeds, run for 10,000 episodes: Seed {}'.format(iteration)
agent.set_greedy_policy(eta=0.1)
starting_learning_rate = 0.075
agent.patience_counter = 0
agent.initialize_state_actions(new_default=default_reward,
do_nothing_action = act0,
do_nothing_bonus = 100)
agent.set_seed(iteration)
env.set_seed(iteration)
i=30
eps=0
history = []
while eps < 10001:
i+=1
eps+= 1
if i>30:
i=0
clear_output()
print(notes)
print(eps, end=" | ")
run_episode.run_episodes(env, agent, eps, history, default_reward, random_charge = False, run_type="once")
agent.learning_rate = starting_learning_rate * np.exp(-0.0002*eps)
agent.set_greedy_policy(eta=0)
reward = run_episode.run_episodes(env, agent, eps, history, default_reward, random_charge=False, run_type='once')
log_history.save_results(env, agent, history, reward, scenario = notes, agent_name=agent.name, notes='Iteration {}'.format(iteration))
# Save the state-action value estimates
val = agent.S_A_values.copy()
val = pd.DataFrame.from_dict(val, orient='index')
val = val.reset_index()
val['state'] = [[i.level_0, i.level_1, i.level_2, i.level_3] for ix, i in val.iterrows()]
val = val.rename(columns={"state": "agent_state"})
val.index = val.agent_state
val = val.drop(columns=['level_0', 'level_1', 'level_2', 'level_3', 'agent_state'])
val.index = [tuple(x) for x in val.index]
val
reward = run_episode.run_episodes(env, agent, eps, history, default_reward, random_charge=False, run_type='once')
reward
log_history.save_results(env, agent, history, reward, scenario = notes, agent_name=agent.name, notes='Iteration {}'.format(iteration))
from matplotlib import pyplot as plt
plt.plot(np.exp(-0.0002*np.arange(0,10000))*0.075)
plt.show()
agent.history
Qs = pd.DataFrame.from_dict(agent.S_A_values, orient='index')
Qs.to_clipboard()
counts = pd.DataFrame.from_dict(agent.S_A_frequency, orient='index')
counts.to_clipboard
print(f"The agent converged after {eps} episodes")
###Output
_____no_output_____
###Markdown
The agent has taken between 10 and 30 minutes, and between 700 and 2262 episodes, to converge on day 1.
Optimal policy: Current reward of -397414.125 / -406791.825, 5600.0 / 6000.0, patience=21
For 2 days, the agent took 5 hours 8 minutes, and converged after 21200 episodes.
Then we allow the agent to take entirely greedy actions and run the algorithm to see how much the agent learned.
###Code
agent.set_greedy_policy(eta=0)
state = env.reset(random_charge=False)
done = False
while not done:
action = agent.get_action(state, list(env.action_mapping.keys()), 0.25)
#print(state)
#action = int(input("action:"))
#print(action)
state, reward, done, details = env.step(action)
try:
new_demand = max(env.grid_flow.net_flow)
orig_demand = max(env.grid_flow.load)
except AttributeError:
new_demand = "???"
orig_demand = "???"
env.grid_flow['final_reward'] = reward
env.grid_flow['original_reward'] = default_reward
print(f"Current reward of {reward} / {default_reward}, {new_demand} / {orig_demand}, patience={agent.patience_counter}")
DF = save_results(scenario='Day1_load', agent_name='DynaQ', notes="ran the DynaQ agent again on the Day1 data, for a second (same agent)")
pd.to_datetime(DF.saved_timestamp)
sum(DF.index.duplicated())
###Output
_____no_output_____ |
IPython_Notebooks/Fig4_Activity_sleep_1month.ipynb | ###Markdown
Timeseries figure for long recording from WT c57BL/6 mouse: activity and sleep
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style="white")
sns.set_context("poster")
###Output
_____no_output_____
###Markdown
Import 1 month of data captured with arduino/Processing programs
###Code
ts_pre = pd.read_csv('../PIRdata/1monthPIRsleep.csv', parse_dates=True, index_col=0) #import data
ts_pre.pop('PIR2') # remove unwanted/empty columns
ts_pre.pop('PIR4')
ts_pre.pop('PIR5')
ts_pre.pop('PIR6')
ts_pre.pop('Device')
ts_pre.head() # show top of dataframe
ts_cutLDR = ts_pre.truncate(before='2014-08-14 06:59:0', after='2014-08-14 07:02:0') # find start of light period
ts_cutLDR['LDR'].plot(figsize=(16,4))
ts_sH = pd.DataFrame.tshift(ts_pre,-7, freq='H', axis=0) # shift back 7 hours
ts = pd.DataFrame.tshift(ts_sH,-1, freq='T', axis=0) # then shift this back 1 minute,
# so time is aligned to lights (environmental time)
###Output
_____no_output_____
###Markdown
Define sleep as a period of immobility of 40 s or more (4 x 10-second bins)
###Code
# run through the trace looking for bouts of sleep (defined as 4 or more sequential '0' values); variable 'a' is a dataframe of PIR data
def sleepscan(a,bins):
ss = a.rolling(bins).sum()
y = ss==0
return y.astype(int) # if numerical output is required
def sleep_count(val):
if val == 0:
sleep_count.count = 0
elif val == 1:
sleep_count.count +=1
return sleep_count.count
sleep_count.count = 0 #static variable
ss = sleepscan(ts,4)
ts['count1'] =ss['PIR1'].apply(sleep_count)
ts['count3'] =ss['PIR3'].apply(sleep_count) #new columns in dataframe
ts['InvSleep1'] = (0-ts.count1)/6 # time (minutes approx) of sleep bout
ts['InvSleep3'] = (0-ts.count3)/6
ss.head()
ts_cut =ts.truncate(before='2014-08-14 00:00:00.000000',after='2014-09-14 00:00:00.000000')
# ts_cut.plot(subplots=True, figsize=(24,8)) #uncomment to see plot
ts_week = ts.truncate(before='2014-08-28 00:00:00.000000',after='2014-09-04 00:00:00.000000')
# ts_week.plot(subplots=True, figsize=(24,8)) #uncomment to see plot
ts_day = ts.truncate(before='2014-08-31 00:00:00.000000',after='2014-09-01 00:00:00.000000')
# ts_day.plot(subplots=True, figsize=(24,8)) #uncomment to see plot
###Output
_____no_output_____
###Markdown
Construct a figure showing 1 month, 1 week and 1 day of this data (to reveal density)
###Code
# setup 3 plots for 1 month, 1 week and 1 day of data
ax1 = plt.subplot2grid((9,1), (0,0), rowspan=2)
ax2 = plt.subplot2grid((9,1), (2,0), rowspan=3)
ax3 = plt.subplot2grid((9,1), (5,0), rowspan=4)
# Plot 1 month of data, showing activity, dark period of each day and periods of immobility scored as sleep (downward deflection)
ax1.fill_between(ts_cut.index, 0,ts_cut['PIR1'], label= "Activty",lw=0, facecolor='#002147') # activity
ax1.fill_between(ts_cut.index, np.min(ts_cut['InvSleep1']),100, where=ts_cut.index.hour>=12,lw=0, alpha=0.2, facecolor='#aaaaaa')
ax1.fill_between(ts_cut.index, 0,ts_cut['InvSleep1'],label= "Immobility >40sec", lw=0, facecolor="#030303")
ax1.set_yticks([])
ax1.set_xticklabels([])
ax1.set_frame_on(0)
# Plot 1 week of data
ax2.fill_between(ts_week.index, 0,ts_week['PIR1'], label= "Activty",lw=0, facecolor='#002147')
ax2.fill_between(ts_week.index, np.min(ts_week['InvSleep1']),100, where=ts_week.index.hour>=12,lw=0, alpha=0.2, facecolor='#aaaaaa')
ax2.fill_between(ts_week.index, 0,ts_week['InvSleep1'],label= "Immobility >40sec", lw=0, facecolor="#030303")
ax2.set_yticks([])
ax2.set_xticklabels([])
ax2.set_frame_on(0)
# Plot 1 day of data, with axes
ax3.fill_between(ts_day.index, 0,ts_day['PIR1'], label= "Activty",lw=0, facecolor='#002147')
ax3.fill_between(ts_day.index, -100,100, where=ts_day.index.hour>=12,lw=0, alpha=0.1, facecolor='#aaaaaa')
ax3.fill_between(ts_day.index, 95,100, where=ts_day.index.hour>=12,lw=0, alpha=1, facecolor='#000000')
ax3.fill_between(ts_day.index, 0,ts_day['InvSleep1'],label= "Immobility >40sec", lw=0, facecolor='#030303')
ax3.set_yticks([-100,-50,0, 50,100])
ax3.set_frame_on(1)
plt.tight_layout(h_pad=4)
#Save and show the figure
#plt.savefig('Month_week_day.jpg',format='jpg',transparent=True, dpi=600,pad_inches=0.2,
# frameon=2)
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/Izhkevich-Neurons-checkpoint.ipynb | ###Markdown
Dynamics of the Izhikevich Model
###Code
%matplotlib inline
%load_ext nb_black
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import argrelmax
def reset(p, v, u, i):
"""resets the values for v and u at a given index"""
v[i] = p["c"]
u[i] += p["d"]
def neuron(p, I, t_max, dt):
""" returns the membrane potential and the recovery
variable u during a given time approximated with the
euler method
p : dictionary, contains the parameters a, b, c, d
I : function, I(t) is the current at time t in mA
t_max : float, time length of the simulation in ms
dt : float, time step in ms
"""
# create arrays for t, u, v and set initial values
t = np.arange(0, t_max, dt)
u = np.zeros((len(t), 1))
u[0] = 0
v = np.zeros((len(t), 1))
v[0] = -80
for i in range(len(t) - 1):
if v[i] >= 30:
reset(p, v, u, i)
u[i + 1] = u[i] + dt * p["a"] * (p["b"] * v[i] - u[i])
v[i + 1] = v[i] + dt * (0.04 * (v[i]) ** 2 + 5 * v[i] + 140 - u[i] + I(t[i]))
return t, v, u
def find_spikes(v):
"""returns the indices of spikes in a given simulation array v"""
extrema = argrelmax(v)
return np.extract(v[extrema] > 0, extrema) # exclude maxima that are not spikes
def neuron_fires(p, I, t_max=300, dt=0.2):
"""returns whether the neuron fires with parameters p and current I or not"""
t, v, _ = neuron(p, I, t_max, dt)
return len(find_spikes(v)) > 1
def find_threshold(p, acc):
"""finds the threshold needed for the neuron to fire with a given accuracy"""
# setup arbitray binary search borders
lower = 0
upper = 100
while upper - lower >= acc:
mid = (lower + upper) / 2
I = lambda t: mid
if neuron_fires(p, I):
upper = mid
else:
lower = mid
return upper
def firing_rate(current, p, t_max=200):
"""computes the firing rate in kHz given an input current I and parameters p"""
dt = 0.1
I = lambda t: current
t, v, _ = neuron(p, I, t_max, dt)
if np.max(v) < 0: # no spikes
return 0.0 # return float in any case
else:
spikes = find_spikes(v)
if len(spikes) <= 1:
return 0.0
else:
return (len(spikes) - 1) / (dt * (spikes[-1] - spikes[0]))
###Output
_____no_output_____
###Markdown
Integrator model
###Code
# 1. integrator model
p1 = {"a": 0.1, "b": 0.05, "c": -50, "d": 8}
# determine threshold
I_thresh = find_threshold(p1, 0.001)
print(
f"The minimum current needed to activate the neuron is about {np.round(I_thresh, 3)} mA."
)
# setup current range and plot firing rates
currents = np.arange(I_thresh - 0.1, I_thresh + 1.5, 0.01)
firing_rates = np.vectorize(firing_rate)(currents, p1)
plt.plot(currents, firing_rates, "+")
plt.title("Firing rates as a function of the input current")
plt.xlabel("Input current in mA")
plt.ylabel("firing rate in Hz")
plt.show()
###Output
_____no_output_____
###Markdown
$\Longrightarrow$ The integrator model has typical firing rates for a type 1 neuron.
###Code
# sinusoidal input current
def sinusoidal_current(w, I_thresh):
return lambda t: I_thresh - 0.05 + 0.04 * np.sin(w * t)
def sinusoidal_neuron(p, omegas, t_max, dt, I_thresh):
"""simulates a neuron with sinusoidal current
p : paramters for the simulation
omegas: array with values for w
t_max, dt : length of simulation and time step
I_thresh : current threshold to activate neuron
returns time array and a list of voltage data"""
simulations = []
for i in range(len(omegas)):
w = omegas[i]
I = sinusoidal_current(w, I_thresh)
t, v, _ = neuron(p, I, t_max, dt)
simulations.append(v)
return np.arange(0, t_max, dt), simulations
# simulation with sinusoidal current
t_max = 200
dt = 0.1
omegas = np.arange(0.1, 1, 0.1)
I_thresh = find_threshold(p1, 0.001)
# plot
fig = plt.figure(figsize=(15, 10))
plt.title(r"Sinusoidal current with respect to $\omega$")
plt.xlabel("time in ms")
plt.ylabel("current in mA")
t, simulations = sinusoidal_neuron(p1, omegas, t_max, dt, I_thresh)
for i in range(len(omegas)):
plt.plot(t[1000:], simulations[i][1000:], label=f"w = {np.round(omegas[i], 3)}")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The voltage can be approximated by $v(t) \approx v_0 + A( \omega ) \sin (\omega t - \varphi)$. Given the plot, we may take an educated guess for the transfer function. The amplitudes are decreasing as $\omega$ increases. We may guess
$$A( \omega ) = \frac{1}{c \cdot \omega}, \ where \ c \in \mathbb{R}.$$
We'll show that this approximation is a good one. However, it should be noted that there are other functions that would approximate the transfer function just as well or even better. Our approximation is not good for very small values of $\omega$, as the amplitude will never exceed 0.5.
###Code
def amplitude(V):
"""returns the amplitude of a given sinusoidal signal V"""
return (np.max(V) - np.min(V)) / 2
def A(c):
return lambda w: 1 / (c * w)
def transfer(p, acc, t_max, w_max, dt, dw, I_thresh):
"""approximates c for the transfer function up to a given accuracy
returns c (float) and the amplitudes(2d array)"""
# simulate neuron for various values of omega
omegas = np.arange(0 + dw, w_max, dw)
t, simulations = sinusoidal_neuron(p, omegas, t_max, dt, I_thresh)
amplitudes = np.zeros((2, len(omegas)))
for i in range(len(omegas)):
amplitudes[0][i] = omegas[i]
amplitudes[1][i] = amplitude(simulations[i][1000:])
# setup binary search
lower = 0
upper = 20
while upper - lower >= acc:
mid = A((lower + upper) / 2)
if np.sum(mid(amplitudes[0]) - amplitudes[1]) > 0: # no need for squares
lower = (lower + upper) / 2
else:
upper = (lower + upper) / 2
return lower, amplitudes
# plot transfer function against the simulated amplitudes
I_thresh = find_threshold(p1, 0.001)
t_max = 1000
w_max = 1
dt = 0.1
dw = 0.05
omegas = np.arange(0 + dw, w_max, dw)
c, amps = transfer(p1, 0 + dw, t_max, w_max, dt, dw, I_thresh)
fig = plt.figure(figsize=(15, 5))
plt.plot(amps[0], amps[1], "+", label="simulated amplitude")
plt.plot(omegas, A(c)(omegas), label="approximation")
plt.xlabel("$\omega$")
plt.title("Transfer function approximation")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Resonator model
###Code
# 2. resonator model
p2 = {"a": 0.1, "b": 0.26, "c": -65, "d": 2}
# calculate and plot threshold
# determine threshold
I_thresh2 = find_threshold(p2, 0.01)
print(
"The minimum current needed to activate "
f"the neuron is about {np.round(I_thresh2, 3)} mA."
)
# setup current range and plot firing rates
t_max = 400
dt = 0.01
currents = np.arange(0, I_thresh2 + 1, 0.01)
firing_rates = np.vectorize(firing_rate)(currents, p2)
plt.plot(currents, firing_rates, "+")
plt.title("Firing rates as a function of the input current")
plt.xlabel("Input current in mA")
plt.ylabel("firing rate in Hz")
plt.show()
###Output
_____no_output_____
###Markdown
It seems like the threshold should be around 0.16. $\Longrightarrow$ The resonator model shows type 2 firing rates.
###Code
# set parameters
I_thresh2 = find_threshold(p2, 0.001)
t_max = 300
w_min = 0.0
w_max = 0.9
dt = 0.1
dw = 0.02
omegas = np.arange(w_min, w_max, dw)
I = lambda t: I_thresh2 + 0.02
# simulate
# we should not need to add anything to the threshold !
t, v, _ = neuron(p2, I, t_max, dt)
c = sinusoidal_current(0.2, I_thresh2)
curr = np.vectorize(c)(t)
t, simulations = sinusoidal_neuron(p2, omegas, t_max, dt, I_thresh2 + 0.02)
# plot current and simulations
fig, ax = plt.subplots(3, 1, figsize=(15, 10))
ax[0].title.set_text(
f"sinusoidal current with a threshold of {np.round(I_thresh2, 3)} mA"
)
ax[0].plot(t, curr)
ax[1].title.set_text(
f"simulation with a constant current of {np.round(I_thresh2, 3)}mA"
)
ax[1].plot(t, v)
ax[2].title.set_text(
"simulation with sinusoidal current, "
+ r"$\omega$"
+ f" between {w_min} and {w_max}, stepsize = {dw}"
)
for i in range(len(omegas)):
if simulations[i].max() > 0:
ax[2].plot(t, simulations[i], label=f"w = {np.round(omegas[i], 3)}")
else:
ax[2].plot(t, simulations[i])
ax[2].legend()
plt.show()
###Output
_____no_output_____
###Markdown
Firing Rate Adaptation
Some neurons will show a period of rapid firing before settling into a stable firing rate. This process is called adaptation and can be modeled with the Izhikevich neuron as well.
###Code
p3 = {"a": 0.003, "b": 0, "c": -65, "d": 0.2}
def step_function(levels, t_max, dt):
"""generates a clamped current function
levels : dict
entries are t: c, where t is the time when the current is set to c
t=0 is preliminary, following entries should be sorted by time.
t_max : int
time in ms that the neuron should be simulated
dt : float
time step
"""
# iterate over levels and populate current array
sortedLevels = sorted(list(levels.items()))
c = sortedLevels[0][1] * np.ones((int(t_max / dt), 1))
for key, val in sortedLevels:
if key != 0:
c[(int(int(key) / dt)) :] = val
return lambda t: c[int(t / dt)]
# simulation with step current
t_max = 500
dt = 0.01
currents3 = {
0: 16,
100: 18,
}
I_3 = step_function(currents3, t_max, dt)
t, v, u = neuron(p3, I_3, t_max, dt)
current = np.vectorize(I_3)(t)
# plot
fig, ax = plt.subplots(2, 1, figsize=(15, 10))
ax[0].title.set_text("Neuron acitivity with current clamp")
fig.text(0.5, 0.04, "time in ms", ha="center", va="center")
ax[0].plot(t, v)
ax[0].set_ylabel("membrane potential in mV")
ax[1].plot(t, current)
ax[1].set_ylabel("current in mA")
plt.show()
###Output
_____no_output_____
###Markdown
Chattering neuron model
###Code
def chattering_neuron(p, d, I, t_max, dt):
""" returns the membrane potential during a given time approximated with the euler method
simulation with a decreasing parameter d
p : dictionary, contains the parameters a, b, c
d : function of the d values(float) in time
I : function, I(t) is the current at time t in pA
t_max : float, time length of the simulation in ms
dt : float, time step in ms
"""
# create arrays for t, u, v and set initial values
t = np.arange(0, t_max, dt)
u = np.zeros((len(t), 1))
u[0] = 0
v = np.zeros((len(t), 1))
v[0] = -80
for i in range(len(t) - 1):
if v[i] >= 30:
chattering_reset(p, d, v, u, i, dt)
u[i + 1] = u[i] + dt * p["a"] * (p["b"] * v[i] - u[i])
v[i + 1] = v[i] + dt * (0.04 * (v[i]) ** 2 + 5 * v[i] + 140 - u[i] + I(t[i]))
return t, v, u
def chattering_reset(p, d, v, u, i, dt):
"""resets the values for v and u at a given index, d is dynamically updated"""
v[i] = p["c"]
u[i] += d(i * dt)
# simulate with decreasing reset parameter d
p4 = {
"a": 0.02,
"b": 0.2,
"c": -50,
}
d_dict = {0: 8, 150: 6, 300: 4, 450: 2}
t_max = 600
dt = 0.01
d = step_function(d_dict, t_max, dt)
I_4 = lambda t: 5
t, v, u = chattering_neuron(p4, d, I_4, t_max, dt)
d_array = np.vectorize(d)(t)
# plot
fig, ax = plt.subplots(2, 1, figsize=(15, 10))
ax[0].title.set_text("Neuron activity with decreasing reset parameter")
fig.text(0.5, 0.04, "time in ms", ha="center", va="center")
ax[0].plot(t, v)
ax[0].set_ylabel("membrane potential in mV")
ax[1].plot(t, d_array)
ax[1].set_ylabel("reset parameter d")
plt.show()
###Output
_____no_output_____
###Markdown
Animations
To get a better understanding of the neuron dynamics, we'll display the phase plane changing over time. You'll probably need to run these twice after resetting the kernel.
###Code
%matplotlib notebook
%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# phase plane showing firing rate adaptation
# nullclines and equilibria
uNull = lambda v, p: p["b"] * v
vNull = lambda v, I: 0.04 * (v ** 2) + (5 * v) + 140 + I
eqV = lambda p, I: np.roots([0.04, 5 - p["b"], 140 + I])
# simulation with step current
t_max = 500
dt = 0.01
currents3 = {
0: 16,
100: 18,
}
I_3 = step_function(currents3, t_max, dt)
t, v, u = neuron(p3, I_3, t_max, dt)
# create figure
fig = plt.figure(figsize=(10, 8), num="Phase Plane for stepwise increased stimulus")
ax1 = fig.add_subplot(1, 1, 1)
vspan = np.arange(-80, 40, 0.1)
# create animation
def animate(i):
if i == 0:
pass
ax1.clear()
ax1.set_ylabel("u", fontsize=10)
ax1.set_xlabel("v in mV", fontsize=10)
ax1.set_xlim(-80, 40)
ax1.set_ylim(-0.05 * np.max(u), 1.3 * np.max(u))
ax1.plot(v[: (i - 1) * 200], u[: (i - 1) * 200], "k+")
ax1.plot(
v[(i - 1) * 200 : i * 200],
u[(i - 1) * 200 : i * 200],
color="magenta",
marker="+",
label=str(2 * i) + " ms",
linewidth=5.0,
)
ax1.plot(
vspan,
uNull(vspan, p3),
color="r",
ls="dotted",
label="u-nullcline",
linewidth=7.0,
)
ax1.plot(
vspan,
vNull(vspan, current[i * 200]),
color="g",
ls="dotted",
label="v-nullcline",
linewidth=7.0,
)
ax1.plot(
30 * np.ones(10),
np.arange(0, 2, 0.2),
color="black",
ls="dotted",
label="reset threshold",
)
ax1.legend(loc="upper right")
ani = animation.FuncAnimation(fig, animate, frames=250, repeat=True)
# save animation
Writer = animation.writers["imagemagick"]
writer = Writer(fps=20, bitrate=900)
# ani.save('.//phase_plane3.gif', writer=writer)
# Phase portrait for chattering neuron
# create figure
fig = plt.figure(
figsize=(10, 8), num="Phase Plane for stepwise decreasing chattering parameter d"
)
ax1 = fig.add_subplot(1, 1, 1)
ax1.set_xlim(np.min(v), 1.1 * np.max(v))
ax1.set_ylabel("u", fontsize=10)
ax1.set_xlabel("v in mV", fontsize=10)
plt.title("Phase plane", fontsize=20)
vspan = np.linspace(np.min(v), 1.1 * np.max(v), 1000)
# simulate chattering neuron
t_max = 600
dt = 0.01
d = step_function(d_dict, t_max, dt)
I_4 = lambda t: 5
t, v, u = chattering_neuron(p4, d, I_4, t_max, dt)
# create animation
def animate(i):
if i == 0:
pass
ax1.clear()
ax1.set_xlim(-90, 50)
ax1.set_ylim(-12, 0)
ax1.set_ylabel("u", fontsize=10)
ax1.set_xlabel("v in mV", fontsize=10)
ax1.plot(v[: (i - 1) * 200], u[: (i - 1) * 200], "k+")
ax1.plot(
v[(i - 1) * 200 : i * 200],
u[(i - 1) * 200 : i * 200],
color="magenta",
ls="dotted",
label=str(2 * i) + " ms",
linewidth=5.0,
)
ax1.plot(
vspan,
uNull(vspan, p4),
color="r",
ls="dotted",
label="u-nullcline",
linewidth=7.0,
)
ax1.plot(
30 * np.ones(12),
np.arange(-12, 0, 1),
color="black",
ls="dotted",
label="reset threshold",
)
ax1.plot(
vspan,
vNull(vspan, I_4(i * 200)),
color="g",
ls="dotted",
label="v-nullcline",
linewidth=7.0,
)
ax1.legend(loc="upper right")
ani = animation.FuncAnimation(fig, animate, frames=300, repeat=True)
# save animation
Writer = animation.writers["imagemagick"]
writer = Writer(fps=20, bitrate=900)
# ani.save('.//phase_plane4.gif', writer=writer)
###Output
_____no_output_____ |
nbs/200_optuna.ipynb | ###Markdown
Optuna: A hyperparameter optimization framework
> Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
###Code
#export
from pathlib import Path
from fastcore.script import *
import joblib
from tsai.imports import *
from importlib import import_module
import warnings
warnings.filterwarnings("ignore")
#exports
def run_optuna_study(objective, resume=None, study_type=None, multivariate=True, search_space=None, evaluate=None, seed=None, sampler=None, pruner=None, study_name=None,
direction='maximize', load_if_exists=False, n_trials=None, timeout=None, gc_after_trial=False, show_progress_bar=True, save_study=True,
path='optuna', show_plots=True):
r"""Creates and runs an optuna study.
Args:
objective: A callable that implements objective function.
resume: Path to a previously saved study.
study_type: Type of study selected (bayesian, gridsearch, randomsearch). Based on this a sampler will be build if sampler is None.
If a sampler is passed, this has no effect.
multivariate: If this is True, the multivariate TPE is used when suggesting parameters. The multivariate TPE is reported to outperform
the independent TPE.
search_space: Search space required when running a gridsearch (if you don't pass a sampler).
evaluate: Allows you to pass a specific set of hyperparameters that will be evaluated.
seed: Fixed seed used by samplers.
sampler: A sampler object that implements background algorithm for value suggestion. If None is specified, TPESampler is used during
single-objective optimization and NSGAIISampler during multi-objective optimization. See also samplers.
pruner: A pruner object that decides early stopping of unpromising trials. If None is specified, MedianPruner is used as the default.
See also pruners.
study_name: Study’s name. If this argument is set to None, a unique name is generated automatically.
direction: A sequence of directions during multi-objective optimization.
n_trials: The number of trials. If this argument is set to None, there is no limitation on the number of trials. If timeout is also set to
None, the study continues to create trials until it receives a termination signal such as Ctrl+C or SIGTERM.
timeout: Stop study after the given number of second(s). If this argument is set to None, the study is executed without time limitation.
If n_trials is also set to None, the study continues to create trials until it receives a termination signal such as
Ctrl+C or SIGTERM.
gc_after_trial: Flag to execute garbage collection at the end of each trial. By default, garbage collection is enabled, just in case.
You can turn it off with this argument if memory is safely managed in your objective function.
show_progress_bar: Flag to show progress bars or not. To disable progress bar, set this False.
save_study: Save your study when finished/ interrupted.
path: Folder where the study will be saved.
show_plots: Flag to control whether plots are shown at the end of the study.
"""
try: import optuna
except ImportError: raise ImportError('You need to install optuna!')
# Sampler
if sampler is None:
if study_type is None or "bayes" in study_type.lower():
sampler = optuna.samplers.TPESampler(seed=seed, multivariate=multivariate)
elif "grid" in study_type.lower():
assert search_space, f"you need to pass a search_space dict to run a gridsearch"
sampler = optuna.samplers.GridSampler(search_space)
elif "random" in study_type.lower():
sampler = optuna.samplers.RandomSampler(seed=seed)
assert sampler, "you need to either select a study type (bayesian, gridsampler, randomsampler) or pass a sampler"
# Study
if resume:
try:
study = joblib.load(resume)
except:
print(f"joblib.load({resume}) couldn't recover any saved study. Check the path.")
return
print("Best trial until now:")
print(" Value: ", study.best_trial.value)
print(" Params: ")
for key, value in study.best_trial.params.items():
print(f" {key}: {value}")
else:
study = optuna.create_study(sampler=sampler, pruner=pruner, study_name=study_name, direction=direction)
if evaluate: study.enqueue_trial(evaluate)
try:
study.optimize(objective, n_trials=n_trials, timeout=timeout, gc_after_trial=gc_after_trial, show_progress_bar=show_progress_bar)
except KeyboardInterrupt:
pass
# Save
if save_study:
full_path = Path(path)/f'{study.study_name}.pkl'
full_path.parent.mkdir(parents=True, exist_ok=True)
joblib.dump(study, full_path)
print(f'\nOptuna study saved to {full_path}')
print(f"To reload the study run: study = joblib.load('{full_path}')")
# Plots
if show_plots and len(study.trials) > 1:
try: display(optuna.visualization.plot_optimization_history(study))
except: pass
try: display(optuna.visualization.plot_param_importances(study))
except: pass
try: display(optuna.visualization.plot_slice(study))
except: pass
try: display(optuna.visualization.plot_parallel_coordinate(study))
except: pass
# Study stats
try:
pruned_trials = [t for t in study.trials if t.state == optuna.trial.TrialState.PRUNED]
complete_trials = [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE]
print(f"\nStudy statistics : ")
print(f" Study name : {study.study_name}")
print(f" # finished trials : {len(study.trials)}")
print(f" # pruned trials : {len(pruned_trials)}")
print(f" # complete trials : {len(complete_trials)}")
print(f"\nBest trial :")
trial = study.best_trial
print(f" value : {trial.value}")
print(f" best_params = {trial.params}\n")
except:
print('\nNo finished trials yet.')
return study
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
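###Markdown
A minimal usage sketch (added for illustration, with a hypothetical objective and search space that are not part of the original notebook; it assumes a recent `optuna` version where `trial.suggest_float` is available):
###Code
def objective(trial):
    # toy objective: maximize -(x - 2)^2 over a 1-D continuous search space (made-up example)
    x = trial.suggest_float("x", -10.0, 10.0)
    return -(x - 2.0) ** 2

# Bayesian (TPE) study with a fixed seed; by default the study is also saved under ./optuna/
study = run_optuna_study(objective, study_type="bayesian", seed=0, n_trials=20,
                         direction="maximize", show_plots=False)
print(study.best_params)
###Output
_____no_output_____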
###Markdown
Optuna: A hyperparameter optimization framework
> Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
###Code
#export
from pathlib import Path
from fastcore.script import *
import joblib
from importlib import import_module
from tsai.imports import *
import warnings
warnings.filterwarnings("ignore")
#exports
def run_optuna_study(objective, resume=None, study_type=None, multivariate=True, search_space=None, evaluate=None, seed=None, sampler=None, pruner=None,
study_name=None, direction='maximize', load_if_exists=False, n_trials=None, timeout=None, gc_after_trial=False, show_progress_bar=True,
save_study=True, path='optuna', show_plots=True):
r"""Creates and runs an optuna study.
Args:
objective: A callable that implements objective function.
resume: Path to a previously saved study.
study_type: Type of study selected (bayesian, gridsearch, randomsearch). Based on this a sampler will be build if sampler is None.
If a sampler is passed, this has no effect.
multivariate: If this is True, the multivariate TPE is used when suggesting parameters. The multivariate TPE is reported to outperform
the independent TPE.
search_space: Search space required when running a gridsearch (if you don't pass a sampler).
evaluate: Allows you to pass a specific set of hyperparameters that will be evaluated.
seed: Fixed seed used by samplers.
sampler: A sampler object that implements background algorithm for value suggestion. If None is specified, TPESampler is used during
single-objective optimization and NSGAIISampler during multi-objective optimization. See also samplers.
pruner: A pruner object that decides early stopping of unpromising trials. If None is specified, MedianPruner is used as the default.
See also pruners.
study_name: Study’s name. If this argument is set to None, a unique name is generated automatically.
direction: A sequence of directions during multi-objective optimization.
n_trials: The number of trials. If this argument is set to None, there is no limitation on the number of trials. If timeout is also set to
None, the study continues to create trials until it receives a termination signal such as Ctrl+C or SIGTERM.
timeout: Stop study after the given number of second(s). If this argument is set to None, the study is executed without time limitation.
If n_trials is also set to None, the study continues to create trials until it receives a termination signal such as
Ctrl+C or SIGTERM.
gc_after_trial: Flag to execute garbage collection at the end of each trial. By default, garbage collection is enabled, just in case.
You can turn it off with this argument if memory is safely managed in your objective function.
show_progress_bar: Flag to show progress bars or not. To disable progress bar, set this False.
        save_study: Save your study when finished or interrupted.
path: Folder where the study will be saved.
show_plots: Flag to control whether plots are shown at the end of the study.
"""
try: import optuna
except ImportError: raise ImportError('You need to install optuna to use run_optuna_study')
# Sampler
if sampler is None:
if study_type is None or "bayes" in study_type.lower():
sampler = optuna.samplers.TPESampler(seed=seed, multivariate=multivariate)
elif "grid" in study_type.lower():
assert search_space, f"you need to pass a search_space dict to run a gridsearch"
sampler = optuna.samplers.GridSampler(search_space)
elif "random" in study_type.lower():
sampler = optuna.samplers.RandomSampler(seed=seed)
    assert sampler, "you need to either select a study type (bayesian, gridsearch, randomsearch) or pass a sampler"
# Study
if resume:
try:
study = joblib.load(resume)
except:
print(f"joblib.load({resume}) couldn't recover any saved study. Check the path.")
return
print("Best trial until now:")
print(" Value: ", study.best_trial.value)
print(" Params: ")
for key, value in study.best_trial.params.items():
print(f" {key}: {value}")
else:
study = optuna.create_study(sampler=sampler, pruner=pruner, study_name=study_name, direction=direction)
if evaluate: study.enqueue_trial(evaluate)
try:
study.optimize(objective, n_trials=n_trials, timeout=timeout, gc_after_trial=gc_after_trial, show_progress_bar=show_progress_bar)
except KeyboardInterrupt:
pass
# Save
if save_study:
full_path = Path(path)/f'{study.study_name}.pkl'
full_path.parent.mkdir(parents=True, exist_ok=True)
joblib.dump(study, full_path)
print(f'\nOptuna study saved to {full_path}')
print(f"To reload the study run: study = joblib.load('{full_path}')")
# Plots
if show_plots and len(study.trials) > 1:
try: display(optuna.visualization.plot_optimization_history(study))
except: pass
try: display(optuna.visualization.plot_param_importances(study))
except: pass
try: display(optuna.visualization.plot_slice(study))
except: pass
try: display(optuna.visualization.plot_parallel_coordinate(study))
except: pass
# Study stats
try:
pruned_trials = [t for t in study.trials if t.state == optuna.trial.TrialState.PRUNED]
complete_trials = [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE]
print(f"\nStudy statistics : ")
print(f" Study name : {study.study_name}")
print(f" # finished trials : {len(study.trials)}")
print(f" # pruned trials : {len(pruned_trials)}")
print(f" # complete trials : {len(complete_trials)}")
print(f"\nBest trial :")
trial = study.best_trial
print(f" value : {trial.value}")
print(f" best_params = {trial.params}\n")
except:
print('\nNo finished trials yet.')
return study
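# Illustrative usage sketch: `run_optuna_study` expects an Optuna-style objective that
# receives a `trial` and returns a score. The toy objective below maximizes -(x - 2)^2
# over a made-up hyperparameter "x"; uncomment the call to actually run a short study.
def _example_objective(trial):
    x = trial.suggest_float("x", -10, 10)  # define-by-run search space
    return -(x - 2) ** 2
# example_study = run_optuna_study(_example_objective, study_type="bayesian", n_trials=10,
#                                  save_study=False, show_plots=False)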
#hide
from tsai.imports import *
from tsai.export import *
nb_name = get_nb_name()
# nb_name = "200_optuna.ipynb"
create_scripts(nb_name);
###Output
_____no_output_____ |
section_4/4-6.ipynb | ###Markdown
Density estimationDensity estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population. Density estimation walks the line between unsupervised learning, feature engineering, and data modeling. Some of the most popular and useful density estimation techniques are mixture models such as Gaussian Mixtures (`sklearn.mixture.GaussianMixture`), and neighbor-based approaches such as the kernel density estimate (`sklearn.neighbors.KernelDensity`). Density estimation is a very simple concept, and most people are already familiar with one common density estimation technique: the histogram. HistogramsA histogram is a simple visualization of data where bins are defined, and the number of data points within each bin is tallied. An example of a histogram can be seen in the upper-left panel of the following figure:
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from distutils.version import LooseVersion
from scipy.stats import norm
from sklearn.neighbors import KernelDensity
# `normed` is being deprecated in favor of `density` in histograms
if LooseVersion(matplotlib.__version__) >= '2.1':
density_param = {'density': True}
else:
density_param = {'normed': True}
# ----------------------------------------------------------------------
# Plot the progression of histograms to kernels
np.random.seed(1)
N = 20
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(13, 13))
fig.subplots_adjust(hspace=0.05, wspace=0.05)
# histogram 1
ax[0, 0].hist(X[:, 0], bins=bins, fc='#AAAAFF', **density_param)
ax[0, 0].text(-3.5, 0.31, "Histogram")
# histogram 2
ax[0, 1].hist(X[:, 0], bins=bins + 0.75, fc='#AAAAFF', **density_param)
ax[0, 1].text(-3.5, 0.31, "Histogram, bins shifted")
# tophat KDE
kde = KernelDensity(kernel='tophat', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 0].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 0].text(-3.5, 0.31, "Tophat Kernel Density")
# Gaussian KDE
kde = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 1].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 1].text(-3.5, 0.31, "Gaussian Kernel Density")
for axi in ax.ravel():
axi.plot(X[:, 0], np.full(X.shape[0], -0.01), '+k')
axi.set_xlim(-4, 9)
axi.set_ylim(-0.02, 0.34)
for axi in ax[:, 0]:
axi.set_ylabel('Normalized Density')
for axi in ax[1, :]:
axi.set_xlabel('x')
plt.show()
###Output
_____no_output_____
###Markdown
A major problem with histograms, however, is that the choice of binning can have a disproportionate effect on the resulting visualization. Consider the upper-right panel of the above figure. It shows a histogram over the same data, with the bins shifted right. The results of the two visualizations look entirely different, and might lead to different interpretations of the data. Intuitively, one can also think of a histogram as a stack of blocks, one block per point. By stacking the blocks in the appropriate grid space, we recover the histogram. But what if, instead of stacking the blocks on a regular grid, we center each block on the point it represents, and sum the total height at each location? This idea leads to the lower-left visualization. It is perhaps not as clean as a histogram, but the fact that the data drive the block locations means that it is a much better representation of the underlying data. This visualization is an example of a *kernel density estimation*, in this case with a top-hat kernel (i.e. a square block at each point). We can recover a smoother distribution by using a smoother kernel. The bottom-right plot shows a Gaussian kernel density estimate, in which each point contributes a Gaussian curve to the total. The result is a smooth density estimate which is derived from the data, and functions as a powerful non-parametric model of the distribution of points. Kernel Density EstimationKernel density estimation is a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the *Parzen–Rosenblatt window*. Kernel density estimation in scikit-learn is implemented in the `sklearn.neighbors.KernelDensity` estimator, which uses the Ball Tree or KD Tree for efficient queries. Though the above example uses a 1D data set for simplicity, kernel density estimation can be performed in any number of dimensions, though in practice the curse of dimensionality causes its performance to degrade in high dimensions. In the following figure, 100 points are drawn from a bimodal distribution, and the kernel density estimates are shown for three choices of kernels:
###Code
from sklearn.neighbors import KernelDensity
N = 100
np.random.seed(1)
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
true_dens = (0.3 * norm(0, 1).pdf(X_plot[:, 0])
+ 0.7 * norm(5, 1).pdf(X_plot[:, 0]))
fig, ax = plt.subplots(figsize=(13, 13))
ax.fill(X_plot[:, 0], true_dens, fc='black', alpha=0.2,
label='input distribution')
colors = ['navy', 'cornflowerblue', 'darkorange']
kernels = ['gaussian', 'tophat', 'epanechnikov']
lw = 2
for color, kernel in zip(colors, kernels):
kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
log_dens = kde.score_samples(X_plot)
ax.plot(X_plot[:, 0], np.exp(log_dens), color=color, lw=lw,
linestyle='-', label="kernel = '{0}'".format(kernel))
ax.text(6, 0.38, "N={0} points".format(N))
ax.legend(loc='upper left')
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), '+k')
ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
###Output
_____no_output_____
###Markdown
It's clear how the kernel shape affects the smoothness of the resulting distribution. The scikit-learn kernel density estimator can be used as follows:
###Code
from sklearn.neighbors import KernelDensity
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
kde.score_samples(X)
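# A minimal added sanity check of what score_samples returns for a Gaussian kernel:
# sklearn reports log( (1/N) * sum_i exp(-||y - x_i||^2 / (2 h^2)) / (2*pi*h^2)^(d/2) ),
# i.e. a properly normalized version of the kernel-sum formula discussed in the next cell.
h = kde.bandwidth
N, d = X.shape
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
manual_log_dens = np.log(np.exp(-sq_dists / (2 * h ** 2)).sum(1) / (N * (2 * np.pi * h ** 2) ** (d / 2)))
print(np.allclose(manual_log_dens, kde.score_samples(X)))  # should print True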
###Output
_____no_output_____
###Markdown
Here we have used `kernel='gaussian'`, as seen above. Mathematically, a kernel is a positive function $K(x;h)$ which is controlled by the bandwidth parameter $h$. Given this kernel form, the density estimate at a point $y$ within a group of points $x_i; i=1\cdots N$ is given by:$$\rho_K(y) = \sum_{i=1}^{N} K(y - x_i; h)$$The bandwidth here acts as a smoothing parameter, controlling the tradeoff between bias and variance in the result. A large bandwidth leads to a very smooth (i.e. high-bias) density distribution. A small bandwidth leads to an unsmooth (i.e. high-variance) density distribution. `sklearn.neighbors.KernelDensity` implements several common kernel forms, which are shown in the following figure:
###Code
X_plot = np.linspace(-6, 6, 1000)[:, None]
X_src = np.zeros((1, 1))
fig, ax = plt.subplots(2, 3, sharex=True, sharey=True, figsize=(13, 13))
fig.subplots_adjust(left=0.05, right=0.95, hspace=0.05, wspace=0.05)
def format_func(x, loc):
if x == 0:
return '0'
elif x == 1:
return 'h'
elif x == -1:
return '-h'
else:
return '%ih' % x
for i, kernel in enumerate(['gaussian', 'tophat', 'epanechnikov',
'exponential', 'linear', 'cosine']):
axi = ax.ravel()[i]
log_dens = KernelDensity(kernel=kernel).fit(X_src).score_samples(X_plot)
axi.fill(X_plot[:, 0], np.exp(log_dens), '-k', fc='#AAAAFF')
axi.text(-2.6, 0.95, kernel)
axi.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
axi.xaxis.set_major_locator(plt.MultipleLocator(1))
axi.yaxis.set_major_locator(plt.NullLocator())
axi.set_ylim(0, 1.05)
axi.set_xlim(-2.9, 2.9)
ax[0, 1].set_title('Available Kernels')
plt.show()
###Output
_____no_output_____
###Markdown
The form of these kernels is as follows:- Gaussian kernel (`kernel = 'gaussian'`) $K(x; h) \propto \exp(- \frac{x^2}{2h^2} )$- Tophat kernel (`kernel = 'tophat'`) $K(x; h) \propto 1$ if $x < h$- Epanechnikov kernel (`kernel = 'epanechnikov'`) $K(x; h) \propto 1 - \frac{x^2}{h^2}$- Exponential kernel (`kernel = 'exponential'`) $K(x; h) \propto \exp(-x/h)$- Linear kernel (`kernel = 'linear'`) $K(x; h) \propto 1 - x/h$ if $x < h$- Cosine kernel (`kernel = 'cosine'`) $K(x; h) \propto \cos(\frac{\pi x}{2h})$ if $x < h$The kernel density estimator can be used with any of the valid distance metrics (see `sklearn.neighbors.DistanceMetric` for a list of available metrics), though the results are properly normalized only for the Euclidean metric. One particularly useful metric is the [Haversine distance](https://en.wikipedia.org/wiki/Haversine_formula) which measures the angular distance between points on a sphere. Here is an example of using a kernel density estimate for a visualization of geospatial data, in this case the distribution of observations of two different species on the South American continent:
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_species_distributions
from sklearn.neighbors import KernelDensity
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
# Get matrices/arrays of species IDs and locations
data = fetch_species_distributions()
species_names = ['Bradypus Variegatus', 'Microryzomys Minutus']
Xtrain = np.vstack([data['train']['dd lat'],
data['train']['dd long']]).T
ytrain = np.array([d.decode('ascii').startswith('micro')
for d in data['train']['species']], dtype='int')
Xtrain *= np.pi / 180. # Convert lat/long to radians
# Set up the data grid for the contour plot
xgrid, ygrid = construct_grids(data)
X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1])
land_reference = data.coverages[6][::5, ::5]
land_mask = (land_reference > -9999).ravel()
xy = np.vstack([Y.ravel(), X.ravel()]).T
xy = xy[land_mask]
xy *= np.pi / 180.
# Plot map of South America with distributions of each species
fig = plt.figure(figsize=(13, 13))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
for i in range(2):
plt.subplot(1, 2, i + 1)
# construct a kernel density estimate of the distribution
print(" - computing KDE in spherical coordinates")
kde = KernelDensity(bandwidth=0.04, metric='haversine',
kernel='gaussian', algorithm='ball_tree')
kde.fit(Xtrain[ytrain == i])
# evaluate only on the land: -9999 indicates ocean
Z = np.full(land_mask.shape[0], -9999, dtype='int')
Z[land_mask] = np.exp(kde.score_samples(xy))
Z = Z.reshape(X.shape)
# plot contours of the density
levels = np.linspace(0, Z.max(), 25)
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c')
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(X, Y, land_reference,
levels=[-9998], colors="k",
linestyles="solid")
plt.xticks([])
plt.yticks([])
plt.title(species_names[i])
plt.show()
###Output
- computing KDE in spherical coordinates
- plot coastlines from coverage
- computing KDE in spherical coordinates
- plot coastlines from coverage
###Markdown
One other useful application of kernel density estimation is to learn a non-parametric generative model of a dataset in order to efficiently draw new samples from this generative model. Here is an example of using this process to create a new set of hand-written digits, using a Gaussian kernel learned on a PCA projection of the data:
###Code
from sklearn.datasets import load_digits
from sklearn.neighbors import KernelDensity
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
# load the data
digits = load_digits()
# project the 64-dimensional data to a lower dimension
pca = PCA(n_components=15, whiten=False)
data = pca.fit_transform(digits.data)
# use grid search cross-validation to optimize the bandwidth
params = {'bandwidth': np.logspace(-1, 1, 20)}
grid = GridSearchCV(KernelDensity(), params)
grid.fit(data)
print("best bandwidth: {0}".format(grid.best_estimator_.bandwidth))
# use the best estimator to compute the kernel density estimate
kde = grid.best_estimator_
# sample 44 new points from the data
new_data = kde.sample(44, random_state=0)
new_data = pca.inverse_transform(new_data)
# turn data into a 4x11 grid
new_data = new_data.reshape((4, 11, -1))
real_data = digits.data[:44].reshape((4, 11, -1))
# plot real digits and resampled digits
fig, ax = plt.subplots(9, 11, subplot_kw=dict(xticks=[], yticks=[]), figsize=(13, 13))
for j in range(11):
ax[4, j].set_visible(False)
for i in range(4):
im = ax[i, j].imshow(real_data[i, j].reshape((8, 8)),
cmap=plt.cm.binary, interpolation='nearest')
im.set_clim(0, 16)
im = ax[i + 5, j].imshow(new_data[i, j].reshape((8, 8)),
cmap=plt.cm.binary, interpolation='nearest')
im.set_clim(0, 16)
ax[0, 5].set_title('Selection from the input data')
ax[5, 5].set_title('"New" digits drawn from the kernel density model')
plt.show()
###Output
best bandwidth: 3.79269019073225
|
courses/modsim2018/tasks/Tasks_ForLecture03/.ipynb_checkpoints/Python_iniciation-checkpoint.ipynb | ###Markdown
Open the data file
###Code
import numpy as np
import matplotlib.pyplot as plt
#data = np.loadtxt('C:\\Users\\Raissa\\Documents\\UFABC\\2018\\MSMH\\Tasks2\\Pezzack.txt', skiprows=1)
t, pos, posNoisy, accel = np.loadtxt('./../Task2/Pezzack.txt', skiprows=6, unpack=True)
###Output
_____no_output_____
###Markdown
Plot the data
###Code
#t = data[:,0]
#Position = data[:,1:2]
#AccelerationMeasured = data[:,3]
Position = pos
AccelerationMeasured = accel
i=10
print ("Position of t =", t[i], "s is ", Position[i]," m")
print ("\nAcceleration of t =", t[i], "s is ", AccelerationMeasured[i]," m/s^2")
# Time step
dt = t[1]-t[0]
# Calculate the first and second derivatives of the Position vector using finite differences
# First Derivative -->v = x'
VelocCalc = np.diff(Position, n=1, axis=0) / dt
print ("Speed Calculated at t =", t[i],"s is: ", VelocCalc[i], "m/s")
# Second Derivative --> a = x''
accelCalc = np.diff(VelocCalc, n=1, axis=0) / dt
print ("\nAcceleration Calculated at t =", t[i],"s is: ", accelCalc[i], "m/s^2")
len(t)
print("Initial size = ",np.size(t))
print("\nSpeed vector size = ",np.size(VelocCalc))
# Make the vector of time to have the same size of the calculated acceleration
new_t_size = np.size(accelCalc)
print("\nAcceleration vector size = ",new_t_size)
new_t = t[0:new_t_size]
print("\nNew t vector size = ", np.size(new_t))
# Make the vector of measured acceleration to have the same size of the calculated acceleration
AccelerationMeasured_newSize = AccelerationMeasured[:new_t_size]
# plot data
hfig, hax = plt.subplots(1, 1, sharex = True, squeeze=True, figsize=(9, 5))
hax.plot(new_t, accelCalc, label='Calculated', linewidth=2)
hax.plot(new_t, AccelerationMeasured_newSize, label='Measured', linewidth=2)
hax.legend(frameon=False)
hax.set_ylabel('Amplitude [m/s$^2$]')
hax.set_xlabel('Time [s]')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
cell-painting/2.train/0.cell-painting-vaeLEVEL4_vanilla_leaveOut.ipynb | ###Markdown
Train a VAE on Cell Painting LINCS Data
###Code
import sys
import pathlib
import numpy as np
import pandas as pd
sys.path.insert(0, "../../scripts")
from utils import load_data
from pycytominer.cyto_utils import infer_cp_features
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from sklearn.decomposition import PCA
from tensorflow import keras
from vae import VAE
from tensorflow.keras.models import Model, Sequential
import seaborn
import random as python_random
import tensorflow as tf
def remove_moa(df):
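    # combined MOA strings ('a|b'): each combined string, plus both of its components, is excluded from df below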
    pipes = ['opioid receptor agonist|opioid receptor antagonist',
             'glucocorticoid receptor agonist|immunosuppressant',
             'AKT inhibitor|mTOR inhibitor',
             'histamine receptor agonist|histamine receptor antagonist',
             'antiviral|RNA synthesis inhibitor']
moas = []
for pipe in pipes:
moas.append(pipe)
moas.append(pipe.split('|')[0])
moas.append(pipe.split('|')[1])
return df[~df.moa.isin(moas)]
data_splits = ["train", "test", "valid","complete"]
data_dict = load_data(data_splits)
# Prepare data for training
meta_features = infer_cp_features(data_dict["train"], metadata=True)
cp_features = infer_cp_features(data_dict["train"])
moa_df_train = pd.read_csv("../3.application/repurposing_info_external_moa_map_resolved.tsv",sep='\t').set_index('broad_sample').reindex(index=data_dict['train']['Metadata_broad_sample']).reset_index().drop('Metadata_broad_sample',axis = 1)
data_dict['train'] = pd.concat([moa_df_train,data_dict['train']], axis=1)
moa_df_valid = pd.read_csv("../3.application/repurposing_info_external_moa_map_resolved.tsv",sep='\t').set_index('broad_sample').reindex(index=data_dict['valid']['Metadata_broad_sample']).reset_index().drop('Metadata_broad_sample',axis = 1)
data_dict['valid'] = pd.concat([moa_df_valid,data_dict['valid']], axis=1)
data_dict['train'] = remove_moa(data_dict['train'])
data_dict['valid'] = remove_moa(data_dict['valid'])
train_features_df = data_dict["train"].reindex(cp_features, axis="columns")
train_meta_df = data_dict["train"].reindex(meta_features, axis="columns")
test_features_df = data_dict["test"].reindex(cp_features, axis="columns")
test_meta_df = data_dict["test"].reindex(meta_features, axis="columns")
valid_features_df = data_dict["valid"].reindex(cp_features, axis="columns")
valid_meta_df = data_dict["valid"].reindex(meta_features, axis="columns")
complete_features_df = data_dict["complete"].reindex(cp_features, axis="columns")
complete_meta_df = data_dict["complete"].reindex(meta_features, axis="columns")
print(train_features_df.shape)
train_features_df.head(3)
print(test_features_df.shape)
test_features_df.head(3)
print(complete_features_df.shape)
complete_features_df.head(3)
encoder_architecture = [250]
decoder_architecture = [250]
cp_vae = VAE(
input_dim=train_features_df.shape[1],
latent_dim=90,
batch_size=32,
encoder_batch_norm=True,
epochs=58,
learning_rate=0.0001,
encoder_architecture=encoder_architecture,
decoder_architecture=decoder_architecture,
beta=1,
verbose=True,
)
cp_vae.compile_vae()
cp_vae.train(x_train=train_features_df, x_test=valid_features_df)
cp_vae.vae
# Save training performance
history_df = pd.DataFrame(cp_vae.vae.history.history)
history_df
history_df.to_csv('training_data/level4_training_vanilla_leaveOut.csv')
plt.figure(figsize=(10, 5))
plt.plot(history_df["loss"], label="Training data")
plt.plot(history_df["val_loss"], label="Validation data")
plt.title("Loss for VAE training on Cell Painting Level 4 data")
plt.ylabel("MSE + KL Divergence")
plt.xlabel("No. Epoch")
plt.legend()
plt.show()
cp_vae.vae.evaluate(test_features_df)
reconstruction = pd.DataFrame(cp_vae.vae.predict(test_features_df), columns=cp_features)
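# single-number reconstruction error: Frobenius norm of (test data - reconstruction)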
(sum(sum((np.array(test_features_df) - np.array(reconstruction)) ** 2))) ** 0.5
#latent space heatmap
fig, ax = plt.subplots(figsize=(10, 10))
encoder = cp_vae.encoder_block["encoder"]
latent = np.array(encoder.predict(test_features_df)[2])
seaborn.heatmap(latent, ax=ax)
reconstruction = pd.DataFrame(cp_vae.vae.predict(test_features_df), columns=cp_features)
pca = PCA(n_components=2).fit(test_features_df)
pca_reconstructed_latent_df = pd.DataFrame(pca.transform(reconstruction))
pca_test_latent_df = pd.DataFrame(pca.transform(test_features_df))
figure(figsize=(10, 10), dpi=80)
plt.scatter(pca_test_latent_df[0],pca_test_latent_df[1], marker = ".", alpha = 0.5)
plt.scatter(pca_reconstructed_latent_df[0],pca_reconstructed_latent_df[1], marker = ".", alpha = 0.5)
decoder = cp_vae.decoder_block["decoder"]
pca_training = PCA(n_components=2).fit(train_features_df)
simulated_df = pd.DataFrame(np.random.normal(size=(40242, 90)), columns=np.arange(0,90))
reconstruction_of_simulated = decoder.predict(simulated_df)
pca_reconstruction_of_simulated = pd.DataFrame(pca_training.transform(reconstruction_of_simulated))
pca_train_latent_df = pd.DataFrame(pca_training.transform(train_features_df))
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(16,8), sharey = True, sharex = True)
ax1.scatter(pca_train_latent_df[0],pca_train_latent_df[1], marker = ".", alpha = 0.5)
ax2.scatter(pca_reconstruction_of_simulated[0],pca_reconstruction_of_simulated[1], marker = ".", alpha = 0.5)
from scipy.spatial.distance import directed_hausdorff
max(directed_hausdorff(reconstruction_of_simulated, train_features_df)[0],directed_hausdorff(train_features_df,reconstruction_of_simulated)[0])
#NOTE: IF YOU RUN THIS, YOU WILL NOT BE ABLE TO REPRODUCE THE EXACT RESULTS IN THE EXPERIMENT
latent_complete = np.array(encoder.predict(complete_features_df)[2])
latent_df = pd.DataFrame(latent_complete)
latent_df.to_csv("../3.application/level4Latent_vanilla_leaveOut.csv")
#NOTE: IF YOU RUN THIS, YOU WILL NOT BE ABLE TO REPRODUCE THE EXACT RESULTS IN THE EXPERIMENT
decoder.save("models/level4Decoder_vanilla_leaveOut")
encoder.save("models/level4Encoder_vanilla_leaveOut")
###Output
INFO:tensorflow:Assets written to: level4Encoder_vanilla_leaveOut/assets
|
examples/text/question_answering_with_bert.ipynb | ###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches. For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example. Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs))
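# Minimal added sketch: for corpora too large to hold in a Python list, the folder-based
# indexer described in the next markdown cell can be used instead. '/tmp/mycorpus' is a
# hypothetical folder of plain-text files, and the argument order shown here
# (folder first, then index location) is an assumption.
import os
if os.path.isdir('/tmp/mycorpus'):
    text.SimpleQA.index_from_folder('/tmp/mycorpus', INDEXDIR)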
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents. The above steps need only be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQuAD model if it does not already exist on your system.
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display_answers` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model trained on the SQuAD dataset. Since the model is combing through paragraphs and sentences to find an answer, it may take a minute or two to return results. Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document.
###Code
print(docs[59])
###Output
Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for Jean Dominique Cassini (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by Ron
Baalke in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada ([email protected]). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator Jeffrey J Bloch
([email protected]) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016.
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating Christianity, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Jesus Christ?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Jesus was, as debated and discussed in this document set. Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches. For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example. Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
multisegment=True, procs=4, # these args speed up indexing
breakup_docs=True # this slows indexing but speeds up answer retrieval
)
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files). Speeding Up IndexingBy default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing. In this case, we've used `multisegment=True` and `procs=4`. Speeding Up Answer RetrievalNote that larger documents will cause inferences in STEP 3 (see below) to be very slow. If your dataset consists of larger documents (e.g., long articles), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs (paragraphs are probably preferable). The latter can be done with *ktrain* using:```pythonktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)```If you supply `breakup_docs=True` in the cell above, this will be done automatically. Note that `breakup_docs=True` will slightly **slow indexing** (i.e., STEP 1), but **speed up answer retrieval** (i.e., STEP 3 below). A second way to speed up answer retrieval is to increase `batch_size` in STEP 3 if using a GPU, which will be discussed later. The above steps need only be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQuAD model if it does not already exist on your system.
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display_answers` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model fine-tuned on the SQuAD dataset. The model will comb through paragraphs and sentences to find candidate answers. By default, `ask` currently uses a `batch_size` of 8, but, if necessary, you can experiment with lowering it by setting the `batch_size` parameter. On a CPU, for instance, you may want to try `batch_size=1`. Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
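# Added note: as mentioned above, `ask` accepts a `batch_size` parameter; on a
# CPU-only machine a smaller value may help, e.g.:
# answers = qa.ask('When did the Cassini probe launch?', batch_size=1)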
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column (i.e., **Document Reference**) shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document. The **Document Reference** values can be customized by supplying a `references` parameter to `index_from_list`. To see the text of the document that contains the top answer, uncomment and execute the following line (it's a comparatively long post).
###Code
#print(docs[59])
###Output
_____no_output_____
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating religions like Christianity and Islam, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Muhammad?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Muhammad, the founder of Islam, was, as debated and discussed in this document set. Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches. For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example. Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain.text.qa import SimpleQA
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
SimpleQA.initialize_index(INDEXDIR)
SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
multisegment=True, procs=4, # these args speed up indexing
breakup_docs=True # this slows indexing but speeds up answer retrieval
)
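# Added sketch: as described in the next markdown cell, folders of .pdf/.docx/.pptx
# files can also be indexed by passing use_text_extraction=True to index_from_folder;
# '/tmp/mydocs' is a hypothetical path:
# SimpleQA.index_from_folder('/tmp/mydocs', INDEXDIR, use_text_extraction=True)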
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files) by default. If your documents are in formats like `.pdf`, `.docx`, or `.pptx`, you can supply the `use_text_extraction=True` argument to `index_from_folder`, which will use the [textract](https://textract.readthedocs.io/en/stable/) package to extract text from different file types and index this text into the search engine for answer retrieval. You can also manually convert them to `.txt` files with the `ktrain.text.textutils.extract_copy` function or tools like [Apache Tika](https://tika.apache.org/) or [textract](https://textract.readthedocs.io/en/stable/). Speeding Up IndexingBy default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing. In this case, we've used `multisegment=True` and `procs=4`. Speeding Up Answer RetrievalNote that larger documents will cause inferences in STEP 3 (see below) to be very slow. If your dataset consists of larger documents (e.g., long articles), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs (paragraphs are probably preferable). The latter can be done with *ktrain* using:```pythonktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)```If you supply `breakup_docs=True` in the cell above, this will be done automatically. Note that `breakup_docs=True` will slightly **slow indexing** (i.e., STEP 1), but **speed up answer retrieval** (i.e., STEP 3 below). A second way to speed up answer retrieval is to increase `batch_size` in STEP 3 if using a GPU, which will be discussed later. The above steps need only be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. (Note that, by default, `SimpleQA` uses TensorFlow. To use PyTorch, supply `framework='pt'` as a parameter.) This step will automatically download the BERT SQuAD model if it does not already exist on your system.
###Code
qa = SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Next, let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display_answers` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model fine-tuned on the SQuAD dataset. The model will comb through paragraphs and sentences to find candidate answers. By default, `ask` currently uses a `batch_size` of 8, but, if necessary, you can experiment with lowering it by setting the `batch_size` parameter. On a CPU, for instance, you may want to try `batch_size=1`. Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column (i.e., **Document Reference**) shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document. The **Document Reference** values can be customized by supplying a `references` parameter to `index_from_list`. To see the text of the document that contains the top answer, uncomment and execute the following line (it's a comparatively long post).
###Code
#print(docs[59])
###Output
_____no_output_____
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating religions like Christianity and Islam, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Muhammad?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Muhammad, the founder of Islam, was, as debated and discussed in this document set. Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches. For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example. Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain.text.qa import SimpleQA
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
SimpleQA.initialize_index(INDEXDIR)
SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
multisegment=True, procs=4, # these args speed up indexing
breakup_docs=True # this slows indexing but speeds up answer retrieval
)
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files) by default. If your documents are in formats like `.pdf`, `.docx`, or `.pptx`, you can supply the `use_text_extraction=True` argument to `index_from_folder`, which will use the [textract](https://textract.readthedocs.io/en/stable/) package to extract text from different file types and index this text into the search engine for answer retrieval. You can also manually convert them to `.txt` files with the `ktrain.text.textutils.extract_copy` function or tools like [Apache Tika](https://tika.apache.org/) or [textract](https://textract.readthedocs.io/en/stable/). Speeding Up IndexingBy default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing. In this case, we've used `multisegment=True` and `procs=4`. Speeding Up Answer RetrievalNote that larger documents will cause inferences in STEP 3 (see below) to be very slow. If your dataset consists of larger documents (e.g., long articles), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs (paragraphs are probably preferable). The latter can be done with *ktrain* using:```pythonktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)```If you supply `breakup_docs=True` in the cell above, this will be done automatically. Note that `breakup_docs=True` will slightly **slow indexing** (i.e., STEP 1), but **speed up answer retrieval** (i.e., STEP 3 below). A second way to speed up answer retrieval is to increase `batch_size` in STEP 3 if using a GPU, which will be discussed later. The above steps need only be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQuAD model if it does not already exist on your system.
###Code
qa = SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display_answers` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model fine-tuned on the SQuAD dataset. The model will comb through paragraphs and sentences to find candidate answers. By default, `ask` currently uses a `batch_size` of 8, but, if necessary, you can experiment with lowering it by setting the `batch_size` parameter. On a CPU, for instance, you may want to try `batch_size=1`.Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
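If you are running on a CPU, the question below could be asked with a smaller batch size, as mentioned above. This is only an illustrative variant of the next cell, not an additional step.
```python
# CPU-friendly variant of the next cell: lower batch_size from its default of 8.
answers = qa.ask('When did the Cassini probe launch?', batch_size=1)
qa.display_answers(answers[:5])
```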
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column (i.e., **Document Reference**) shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document. The **Document Reference** values can be customized by supplying a `references` parameter to `index_from_list`.To see the text of the document that contains the top answer, uncomment and execute the `print(docs[59])` line in the code cell below (it's a comparatively long post).
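As an aside, the sketch below shows one way the `references` parameter mentioned above might be supplied when indexing. The label format (one string per document) is our assumption for illustration; it is not demonstrated in this notebook.
```python
# Hypothetical labels, one per document, passed via the `references` parameter at indexing time.
labels = ['newsgroup-post-%d' % i for i in range(len(docs))]
SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
                         multisegment=True, procs=4, breakup_docs=True,
                         references=labels)
```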
###Code
#print(docs[59])
###Output
_____no_output_____
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating religions like Christianity and Islam, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Muhammad?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Muhammad, the founder of Islam, was, as debated and discussed in this document set. Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches.For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example.Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load the 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
multisegment=True, procs=4, # these args speed up indexing
breakup_docs=True # this slows indexing but speeds up answer retrieval
)
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files) by default. If your documents are in formats like `.pdf`, `.docx`, or `.pptx`, you can supply the `use_text_extraction=True` argument to `index_from_folder`, which will use the [textract](https://textract.readthedocs.io/en/stable/) package to extract text from different file types and index this text into the search engine for answer retrieval. You can also manually convert them to `.txt` files with `ktrain.text.textutils.extract_copy` or tools like [Apache Tika](https://tika.apache.org/) or [textract](https://textract.readthedocs.io/en/stable/). Speeding Up IndexingBy default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing. In this case, we've used `multisegment=True` and `procs=4`. Speeding Up Answer RetrievalNote that larger documents will cause inferences in STEP 3 (see below) to be very slow. If your dataset consists of larger documents (e.g., long articles), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs (paragraphs are probably preferable). The latter can be done with *ktrain* using:
```python
ktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)
```
If you supply `breakup_docs=True` in the cell above, this will be done automatically. Note that `breakup_docs=True` will slightly **slow indexing** (i.e., STEP 1), but **speed up answer retrieval** (i.e., STEP 3 below). A second way to speed up answer retrieval is to increase `batch_size` in STEP 3 if using a GPU, which will be discussed later.The above steps only need to be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQuAD model if it does not already exist on your system.
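Before moving on to STEP 2, here is a sketch of splitting documents into paragraphs manually instead of relying on `breakup_docs=True`. It assumes `paragraph_tokenize` returns a list of paragraph strings, and it is meant as an illustration of how the indexing cell above could be modified, not as an extra step to run.
```python
# Manually split each document into paragraphs before indexing (alternative to breakup_docs=True).
paragraphs = []
for d in docs:
    paragraphs.extend(ktrain.text.textutils.paragraph_tokenize(d, join_sentences=True))
text.SimpleQA.index_from_list(paragraphs, INDEXDIR, commit_every=len(paragraphs))
```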
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display_answers` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model fine-tuned on the SQuAD dataset. The model will comb through paragraphs and sentences to find candidate answers. By default, `ask` currently uses a `batch_size` of 8, but, if necessary, you can experiment with lowering it by setting the `batch_size` parameter. On a CPU, for instance, you may want to try `batch_size=1`.Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column (i.e., **Document Reference**) shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document. The **Document Reference** values can be customized by supplying a `references` parameter to `index_from_list`.To see the text of the document that contains the top answer, uncomment and execute the following line (it's a comparatively long post).
###Code
#print(docs[59])
###Output
_____no_output_____
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating religions like Christianity and Islam, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Muhammad?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Muhammad, the founder of Islam, was, as debated and discussed in this document set. Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches.For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example.Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load the 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs))
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files).By default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing.Note that a small number of large documents will cause inferences in STEP 3 to be very slow. If your dataset consists of large documents (e.g., books or long papers), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs. The latter can be done with *ktrain* using:
```python
ktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)
```
The above steps only need to be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQUAD model if it does not already exist on your system.
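As a concrete sketch of the indexing arguments described above, the indexing cell could be modified as follows; the values are illustrative, not recommendations.
```python
# Use more processors, more memory per processor, and multiple segments (illustrative values).
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs),
                              procs=4, limitmb=512, multisegment=True)
```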
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model trained on the SQUAD dataset. Since the model is combing through paragraphs and sentences to find an answer, it may take a minute or two to return results.Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document.
###Code
print(docs[59])
###Output
Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for Jean Dominique Cassini (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by Ron
Baalke in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada ([email protected]). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator Jeffrey J Bloch
([email protected]) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016.
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating Christianity, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Jesus?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Jesus was, as debated and discussed in this document set.Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches.For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example.Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load the 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs))
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents.By default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing.Note that a small number of large documents will cause inferences in STEP 3 to be very slow. If your dataset consists of large documents (e.g., books or long papers), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs. The latter can be done with *ktrain* using:
```python
ktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)
```
The above steps only need to be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQUAD model if it does not already exist on your system.
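For the folder-crawling variant mentioned above, a minimal sketch might look like the following; both paths are hypothetical, and the positional argument order is assumed for illustration.
```python
# Index every plain text document found under a (hypothetical) folder of .txt files.
text.SimpleQA.initialize_index('/tmp/myindex_folder')
text.SimpleQA.index_from_folder('/tmp/my_txt_docs', '/tmp/myindex_folder')
```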
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model trained on the SQUAD dataset. Since the model is combing through paragraphs and sentences to find an answer, it may take a minute or two to return results.Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document.
###Code
print(docs[59])
###Output
Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for Jean Dominique Cassini (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by Ron
Baalke in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada ([email protected]). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator Jeffrey J Bloch
([email protected]) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016.
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating Christianity, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Jesus Christ?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Jesus was, as debated and discussed in this document set.Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Building an End-to-End Question-Answering System With BERTIn this notebook, we build a practical, end-to-end Question-Answering (QA) system with BERT in roughly 3 lines of code. We will treat a corpus of text documents as a knowledge base to which we can ask questions and retrieve exact answers using [BERT](https://arxiv.org/abs/1810.04805). This goes beyond simplistic keyword searches.For this example, we will use the [20 Newsgroup dataset](http://qwone.com/~jason/20Newsgroups/) as the text corpus. As a collection of newsgroup postings which contains an abundance of opinions and debates, the corpus is not ideal as a knowledge base. It is better to use fact-based documents such as Wikipedia articles or even news articles. However, this dataset will suffice for this example.Let us begin by loading the dataset into an array using **scikit-learn** and importing *ktrain* modules.
###Code
# load the 20newsgroups dataset into an array
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
docs = newsgroups_train.data + newsgroups_test.data
import ktrain
from ktrain import text
###Output
_____no_output_____
###Markdown
STEP 1: Index the DocumentsWe will first index the documents into a search engine that will be used to quickly retrieve documents that are likely to contain answers to a question. To do so, we must choose an index location, which must be a folder that does not already exist. Since the newsgroup postings are small and fit in memory, we will set `commit_every` to a large value to speed up the indexing process. This means results will not be written until the end. If you experience issues, you can lower this value.
###Code
INDEXDIR = '/tmp/myindex'
text.SimpleQA.initialize_index(INDEXDIR)
text.SimpleQA.index_from_list(docs, INDEXDIR, commit_every=len(docs))
###Output
_____no_output_____
###Markdown
For document sets that are too large to be loaded into a Python list, you can use `SimpleQA.index_from_folder`, which will crawl a folder and index all plain text documents (e.g., `.txt` files).By default, `index_from_list` and `index_from_folder` use a single processor (`procs=1`) with each processor using a maximum of 256MB of memory (`limitmb=256`) and merging results into a single segment (`multisegment=False`). These values can be changed to speed up indexing as arguments to `index_from_list` or `index_from_folder`. See the [whoosh documentation](https://whoosh.readthedocs.io/en/latest/batch.html) for more information on these parameters and how to use them to speed up indexing.Note that a small number of large documents will cause inferences in STEP 3 to be very slow. If your dataset consists of large documents (e.g., books or long papers), we recommend breaking them up into pages (e.g., splitting the original PDF using something like `pdfseparate`) or splitting them into paragraphs. The latter can be done with *ktrain* using:
```python
ktrain.text.textutils.paragraph_tokenize(document, join_sentences=True)
```
The above steps only need to be performed once. Once an index is already created, you can skip this step and proceed directly to **STEP 2** to begin using your system. STEP 2: Create a QA instanceNext, we create a QA instance. This step will automatically download the BERT SQUAD model if it does not already exist on your system.
###Code
qa = text.SimpleQA(INDEXDIR)
###Output
_____no_output_____
###Markdown
That's it! In roughly **3 lines of code**, we have built an end-to-end QA system that can now be used to generate answers to questions. Let's ask our system some questions. STEP 3: Ask QuestionsWe will invoke the `ask` method to issue questions to the text corpus we indexed and retrieve answers. We will also use the `qa.display` method to nicely display the top 5 results in this Jupyter notebook. The answers are inferred using a BERT model trained on the SQUAD dataset. Since the model is combing through paragraphs and sentences to find an answer, it may take a minute or two to return results.Note also that the 20 Newsgroup Dataset covers events in the early to mid 1990s, so references to recent events will not exist. Space Question
###Code
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
As you can see, the top candidate answer indicates that the Cassini space probe was launched in October of 1997, which appears to be correct. The correct answer will not always be the top answer, but it is in this case. Note that, since we used `index_from_list` to index documents, the last column shows the list index associated with the newsgroup posting containing the answer, which can be used to peruse the entire document containing the answer. If using `index_from_folder` to index documents, the last column will show the relative path and filename of the document.
###Code
print(docs[59])
###Output
Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for Jean Dominique Cassini (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by Ron
Baalke in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada ([email protected]). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator Jeffrey J Bloch
([email protected]) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016.
###Markdown
The 20 Newsgroup dataset contains lots of posts discussing and debating Christianity, as well. Let's ask a question on this subject. Religious Question
###Code
answers = qa.ask('Who was Jesus?')
qa.display_answers(answers[:5])
###Output
_____no_output_____
###Markdown
Here, we see different views on who Jesus was as debated and discussed in this document set.Finally, the 20 Newsgroup dataset also contains many groups about computing hardware and software. Let's ask a technical support question. Technical Question
###Code
answers = qa.ask('What causes computer images to be too dark?')
qa.display_answers(answers[:5])
###Output
_____no_output_____ |
jupyter/2018-02-13(BCPNN perfect - theory I, learning properties).ipynb | ###Markdown
BCPNN perfect II - Learning Properties
###Code
import pprint
import subprocess
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 12)
np.set_printoptions(suppress=True, precision=2)
sns.set(font_scale=3.5)
from network import Protocol, BCPNNModular, NetworkManager, BCPNNPerfect
from plotting_functions import plot_weight_matrix, plot_state_variables_vs_time, plot_winning_pattern
from plotting_functions import plot_network_activity, plot_network_activity_angle
from analysis_functions import calculate_recall_time_quantities, calculate_angle_from_history
from connectivity_functions import artificial_connectivity_matrix
def simple_bcpnn_theo_recall_time(tau_a, g_a, g_w, w_next, w_self):
delta_w = w_self - w_next
return tau_a * np.log(g_a / (g_a - g_w * delta_w))
###Output
_____no_output_____
###Markdown
An example General parameters
###Code
g_w_ampa = 2.0
g_w = 0.0
g_a = 10.0
tau_a = 0.250
G = 1.0
sigma = 0.0
# Patterns parameters
hypercolumns = 1
minicolumns = 10
n_patterns = 10
# Manager properties
dt = 0.001
values_to_save = ['o', 's', 'z_pre', 'z_post', 'a', 'i_ampa', 'i_nmda']
# Protocol
training_time = 0.100
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
# Build the network
nn = BCPNNPerfect(hypercolumns, minicolumns, g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(n_patterns)]
protocol.simple_protocol(patterns_indexes, training_time=training_time, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=True)
plot_weight_matrix(manager.nn, ampa=True)
T_recall = 2.0
T_cue = 0.100
sequences = [patterns_indexes]
I_cue = 0.0
n = 1
aux = calculate_recall_time_quantities(manager, T_recall, T_cue, n, sequences)
total_sequence_time, mean, std, success, timings = aux
plot_network_activity_angle(manager)
print('success', success)
###Output
success 100.0
###Markdown
A simple example of the weight evolution
###Code
tau_z_pre = 0.050
# Patterns parameters
hypercolumns = 1
minicolumns = 10
n_patterns = 10
# Manager properties
dt = 0.001
values_to_save = ['o', 's', 'z_pre', 'z_post', 'a', 'i_ampa', 'i_nmda']
# Protocol
training_time = 0.100
inter_sequence_interval = 0
inter_pulse_interval = 0.0
epochs = 1
# Build the network
nn = BCPNNPerfect(hypercolumns, minicolumns, g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G, tau_z_pre=tau_z_pre,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
nn.z_pre = np.zeros(nn.n_units)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(n_patterns)]
protocol.simple_protocol(patterns_indexes, training_time=training_time, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=True)
o = manager.history['o']
z = manager.history['z_pre']
patterns = [3, 4]
linewidth = 10
time = np.arange(0, manager.T_total, dt)
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(time, o[:, 3], linewidth=linewidth, ls='--', color='black', label='o_1')
ax1.plot(time, o[:, 4], linewidth=linewidth, ls='-', color='black', label='o_2')
y1 = z[:, 3]
y2 = z[:, 4]
ax2.plot(time, y1, linewidth=linewidth, ls='--', color='black', label=r'$z_{1}$')
ax2.plot(time, y2, linewidth=linewidth, ls='-', color='black', label=r'$z_{2}$')
z = y1 * y2
if True:
ax2.fill_between(time, z, 0, color='red', label='co-activation')
else:
ax2.fill_between(time, y1, 0, where=y1 <= y2, color='red', label='co-activation')
ax2.fill_between(time, y2, 0, where=y2 < y1, color='red');
ax2.legend()
if True:
ax1.axis('off')
ax2.axis('off');
###Output
_____no_output_____
###Markdown
Learning Here we need to show how the learning looks across time; the important parameters are the training time and the number of epochs. But first let's extract the data of the pattern above.* Training time* Epochs* Number of patterns* Number of minicolumns
###Code
from_pattern = 2
to_pattern = 3
def get_weights(manager, from_pattern, to_pattern):
w_self = manager.nn.w_ampa[from_pattern, from_pattern]
w_next = manager.nn.w_ampa[to_pattern, from_pattern]
w_rest = np.mean(nn.w_ampa[(to_pattern + 1):, from_pattern])
return w_self, w_next, w_rest
w_self, w_next, w_rest = get_weights(manager, from_pattern, to_pattern)
print('w self', w_self)
print('w_next', w_next)
print('w_rest', w_rest)
###Output
w self 0.587890794147
w_next -0.0735762330699
w_rest -0.0989181819004
###Markdown
General parameters
###Code
g_w_ampa = 2.0
g_w = 0.0
g_a = 10.0
tau_a = 0.250
G = 1.0
sigma = 0.0
# Patterns parameters
hypercolumns = 1
minicolumns = 10
n_patterns = 10
# Manager properties
dt = 0.001
values_to_save = ['o', 's']
# Protocol
training_time = 0.100
inter_sequence_interval = 1.0
inter_pulse_interval = 0.0
epochs = 3
markersize = 32
linewidth = 10
###Output
_____no_output_____
###Markdown
Training times
###Code
training_times_vector = np.arange(0.050, 2.050, 0.050)
w_self_vector_tt = np.zeros_like(training_times_vector)
w_next_vector_tt = np.zeros_like(training_times_vector)
w_rest_vector_tt = np.zeros_like(training_times_vector)
for index, training_time_ in enumerate(training_times_vector):
# Build the network
nn = BCPNNPerfect(hypercolumns, minicolumns, g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(n_patterns)]
protocol.simple_protocol(patterns_indexes, training_time=training_time_, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=False)
w_self, w_next, w_rest = get_weights(manager, from_pattern, to_pattern)
w_self_vector_tt[index] = w_self
w_next_vector_tt[index] = w_next
w_rest_vector_tt[index] = w_rest
fig1 = plt.figure(figsize=(16, 12))
ax1 = fig1.add_subplot(111)
ax1.plot(training_times_vector, w_self_vector_tt, '*-', lw=linewidth, markersize=markersize, label=r'$w_{self}$')
ax1.plot(training_times_vector, w_next_vector_tt, '*-', lw=linewidth, markersize=markersize, label=r'$w_{next}$')
ax1.plot(training_times_vector, w_rest_vector_tt, '*-', lw=linewidth, markersize=markersize, label=r'$w_{rest}$')
ax1.set_xlabel('Training time (s)')
ax1.set_ylabel('Weight')
ax1.axhline(0, ls='--', color='black')
ax1.axvline(0, ls='--', color='black')
ax1.legend();
###Output
_____no_output_____
###Markdown
Epochs
###Code
epochs_vector = np.arange(1, 50, 1, dtype='int')
w_self_vector_epochs = np.zeros_like(epochs_vector, dtype='float')
w_next_vector_epochs = np.zeros_like(epochs_vector, dtype='float')
w_rest_vector_epochs = np.zeros_like(epochs_vector, dtype='float')
for index, epochs_ in enumerate(epochs_vector):
# Build the network
nn = BCPNNPerfect(hypercolumns, minicolumns, g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(n_patterns)]
protocol.simple_protocol(patterns_indexes, training_time=training_time, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs_)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=False)
w_self, w_next, w_rest = get_weights(manager, from_pattern, to_pattern)
w_self_vector_epochs[index] = w_self
w_next_vector_epochs[index] = w_next
w_rest_vector_epochs[index] = w_rest
fig2 = plt.figure(figsize=(16, 12))
ax2 = fig2.add_subplot(111)
ax2.plot(epochs_vector, w_self_vector_epochs, '*-', lw=linewidth, markersize=markersize, label=r'$w_{self}$')
ax2.plot(epochs_vector, w_next_vector_epochs, '*-', lw=linewidth, markersize=markersize, label=r'$w_{next}$')
ax2.plot(epochs_vector, w_rest_vector_epochs, '*-', lw=linewidth, markersize=markersize, label=r'$w_{rest}$')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Weight')
ax2.axhline(0, ls='--', color='black')
ax2.axvline(0, ls='--', color='black')
ax2.legend();
###Output
_____no_output_____
###Markdown
Number of minicolumns
###Code
minicolumns_vector = np.arange(10, 55, 5, dtype='int')
w_self_vector_minicolumns = np.zeros_like(minicolumns_vector, dtype='float')
w_next_vector_minicolumns = np.zeros_like(minicolumns_vector, dtype='float')
w_rest_vector_minicolumns = np.zeros_like(minicolumns_vector, dtype='float')
for index, minicolumns_ in enumerate(minicolumns_vector):
# Build the network
nn = BCPNNPerfect(hypercolumns, minicolumns_, g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(minicolumns_)]
protocol.simple_protocol(patterns_indexes, training_time=training_time, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=False)
w_self, w_next, w_rest = get_weights(manager, from_pattern, to_pattern)
w_self_vector_minicolumns[index] = w_self
w_next_vector_minicolumns[index] = w_next
w_rest_vector_minicolumns[index] = w_rest
fig3 = plt.figure(figsize=(16, 12))
ax3 = fig3.add_subplot(111)
ax3.plot(minicolumns_vector, w_self_vector_minicolumns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{self}$')
ax3.plot(minicolumns_vector, w_next_vector_minicolumns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{next}$')
ax3.plot(minicolumns_vector, w_rest_vector_minicolumns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{rest}$')
ax3.set_xlabel('Minicolumns')
ax3.set_ylabel('Weight')
ax3.axhline(0, ls='--', color='black')
ax3.axvline(0, ls='--', color='black')
ax3.legend();
###Output
_____no_output_____
###Markdown
Number of patterns
###Code
n_patterns_vector = np.arange(10, 55, 5, dtype='int')
w_self_vector_patterns = np.zeros_like(n_patterns_vector, dtype='float')
w_next_vector_patterns = np.zeros_like(n_patterns_vector, dtype='float')
w_rest_vector_patterns = np.zeros_like(n_patterns_vector, dtype='float')
for index, n_patterns_ in enumerate(n_patterns_vector):
# Build the network
nn = BCPNNPerfect(hypercolumns, n_patterns_vector[-1], g_w_ampa=g_w_ampa, g_w=g_w, g_a=g_a, tau_a=tau_a,
sigma=sigma, G=G,
z_transfer=False, diagonal_zero=False, strict_maximum=True, perfect=True)
# Build the manager
manager = NetworkManager(nn=nn, dt=dt, values_to_save=values_to_save)
# Build the protocol for training
protocol = Protocol()
patterns_indexes = [i for i in range(n_patterns_)]
protocol.simple_protocol(patterns_indexes, training_time=training_time, inter_pulse_interval=inter_pulse_interval,
inter_sequence_interval=inter_sequence_interval, epochs=epochs)
# Train
epoch_history = manager.run_network_protocol(protocol=protocol, verbose=False)
w_self, w_next, w_rest = get_weights(manager, from_pattern, to_pattern)
w_self_vector_patterns[index] = w_self
w_next_vector_patterns[index] = w_next
w_rest_vector_patterns[index] = w_rest
fig4 = plt.figure(figsize=(16, 12))
ax4 = fig4.add_subplot(111)
ax4.plot(n_patterns_vector, w_self_vector_patterns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{self}$')
ax4.plot(n_patterns_vector, w_next_vector_patterns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{next}$')
ax4.plot(n_patterns_vector, w_rest_vector_patterns, '*-', lw=linewidth, markersize=markersize, label=r'$w_{rest}$')
ax4.set_xlabel('Number of patterns')
ax4.set_ylabel('Weight')
ax4.axhline(0, ls='--', color='black')
ax4.axvline(0, ls='--', color='black')
ax4.legend();
###Output
_____no_output_____ |
analysis/milestone1.ipynb | ###Markdown
MyAnimeList Recommendations Database analysis Step 1: Loading the Data Let's import all the modules we need for the analysis and start loading the files we need into `pandas` dataframes from their respective `.csv` files. First up is the **Anime metadata**, located in `../data/raw/anime.csv`:
###Code
import pandas as pd
import numpy as np
anime_df = pd.read_csv('../data/raw/anime.csv')
anime_df.head()
###Output
_____no_output_____
###Markdown
Looking good!Now let's do the same for the **User Ratings Data**, located in `../data/raw/rating.csv`:
###Code
ratings_df = pd.read_csv('../data/raw/rating.csv')
ratings_df.head()
###Output
_____no_output_____
###Markdown
Milestone 1
###Code
# Importing modules and frameworks
import pandas as pd
import os
# Getting data
directory = "/home/yohen/Documents/Github/course-project-solo_331/data/raw/"
os.chdir(directory)
files = os.listdir()
# Loading data into pandas.df
print(directory+files[0])
covid_19_india = pd.read_csv(directory+files[0])
statewise_tests = pd.read_csv(directory+files[1])
###Output
/home/yohen/Documents/Github/course-project-solo_331/data/raw/StatewiseTestingDetails.csv
###Markdown
Milestone 1Load games.csv from data/raw into a Pandas dataframe.
###Code
import pandas as pd
df = pd.read_csv("../data/raw/games.csv")
print(df.head())
###Output
id rated created_at last_move_at turns victory_status winner \
0 TZJHLljE False 1.504210e+12 1.504210e+12 13 outoftime white
1 l1NXvwaE True 1.504130e+12 1.504130e+12 16 resign black
2 mIICvQHh True 1.504130e+12 1.504130e+12 61 mate white
3 kWKvrqYL True 1.504110e+12 1.504110e+12 61 mate white
4 9tXo1AUZ True 1.504030e+12 1.504030e+12 95 mate white
increment_code white_id white_rating black_id black_rating \
0 15+2 bourgris 1500 a-00 1191
1 5+10 a-00 1322 skinnerua 1261
2 5+10 ischia 1496 a-00 1500
3 20+0 daniamurashov 1439 adivanov2009 1454
4 30+3 nik221107 1523 adivanov2009 1469
moves opening_eco \
0 d4 d5 c4 c6 cxd5 e6 dxe6 fxe6 Nf3 Bb4+ Nc3 Ba5... D10
1 d4 Nc6 e4 e5 f4 f6 dxe5 fxe5 fxe5 Nxe5 Qd4 Nc6... B00
2 e4 e5 d3 d6 Be3 c6 Be2 b5 Nd2 a5 a4 c5 axb5 Nc... C20
3 d4 d5 Nf3 Bf5 Nc3 Nf6 Bf4 Ng4 e3 Nc6 Be2 Qd7 O... D02
4 e4 e5 Nf3 d6 d4 Nc6 d5 Nb4 a3 Na6 Nc3 Be7 b4 N... C41
opening_name opening_ply
0 Slav Defense: Exchange Variation 5
1 Nimzowitsch Defense: Kennedy Variation 4
2 King's Pawn Game: Leonardis Variation 3
3 Queen's Pawn Game: Zukertort Variation 3
4 Philidor Defense 5
|
sorthingAlgo.ipynb | ###Markdown
Using Bubble sort Algorithm
###Code
length = len(array)
# Bubble sort in descending order: repeatedly swap adjacent
# elements that are out of order, printing after every swap.
for i in range(0, length - 1):
    for j in range(length - i - 1):
        if array[j] < array[j + 1]:
            array[j + 1], array[j] = array[j], array[j + 1]
            # Print the array after each swap
            print("Inside If =", array)
print(array)
###Output
Inside If = [2, 1, 1, 25, 51, 5]
Inside If = [2, 1, 25, 1, 51, 5]
Inside If = [2, 1, 25, 51, 1, 5]
Inside If = [2, 1, 25, 51, 5, 1]
Inside If = [2, 25, 1, 51, 5, 1]
Inside If = [2, 25, 51, 1, 5, 1]
Inside If = [2, 25, 51, 5, 1, 1]
Inside If = [25, 2, 51, 5, 1, 1]
Inside If = [25, 51, 2, 5, 1, 1]
Inside If = [25, 51, 5, 2, 1, 1]
Inside If = [51, 25, 5, 2, 1, 1]
[51, 25, 5, 2, 1, 1]
|
.ipynb_checkpoints/Bimodel Test-checkpoint.ipynb | ###Markdown
|species|spec_as_int|
|---|---|
|acerifolia_x|1|
|aestivalis_x|2|
|cinerea_x|3|
|labrusca_x|4|
|palmata_x|5|
|riparia_x|6|
|rupestris_x|7|
|vulpina_x|8|
|acerifolia_y|9|
|aestivalis_y|10|
|cinerea_y|11|
|labrusca_y|12|
|palmata_y|13|
|riparia_y|14|
|rupestris_y|15|
|vulpina_y|16|
|acerifolia_z|17|
|aestivalis_z|18|
|cinerea_z|19|
|labrusca_z|20|
|palmata_z|21|
|riparia_z|22|
|rupestris_z|23|
|vulpina_z|24|
###Code
table <- table(Predicted=predictions$class, Species=test.data$spec_as_int)
print(confusionMatrix(table))
###Output
_____no_output_____ |
AlDa/blatt6/Exercise06.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = "Maryna Charniuk"
COLLABORATORS = "Dung Nguyen, Lyubomira Dimitrova"
###Output
_____no_output_____
###Markdown
--- HASHING Write a simple hash function `my_hash(string, s)` with table size $s$ using the modulo operation (also known as the *division method* or *division remainder hashing*). (4 points)
###Code
def my_hash(string, s):
sum_chars = 0
for char in string:
sum_chars += ord(char)
return sum_chars % s
###Output
_____no_output_____
###Markdown
Using the function you wrote, compute the hash values of the strings **done** and **node** with a table size of $s = 3$. (3 points)
###Code
print(my_hash('done', 3))
print(my_hash('node', 3))
assert my_hash('done', 3) == my_hash('node', 3)
###Output
2
2
###Markdown
Is the hash of the two strings the same? Give the reason. (3 points) Yes, because we add up the ASCII values of all characters in the string, so the order of the characters plays no role. To avoid this, one could give each character a weight, e.g. its position in the string (a small sketch of this idea follows below). A widely used method for collision resolution is *linear probing*. Create a hash table of size $s = 11$ in the form of an associative array (dictionary) for the list `lisf = [10,45,43,76,57,12,77,13]`. The hash table should look as follows: `table = {0: 76, 1: 12, 2: 45, 3: 10, 4: 43, 5: 77, 6: 13, 7: None, 8: None, 9: None, 10: 57}`. (10 points)
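A minimal sketch of such a position-weighted variant (illustrative only, not part of the graded solution):

```python
def weighted_hash(string, s):
    # weight each character by its (1-based) position, so that
    # reordering the characters changes the hash value
    total = 0
    for position, char in enumerate(string, start=1):
        total += position * ord(char)
    return total % s

# weighted_hash('done', 3) -> 0, weighted_hash('node', 3) -> 1
```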
###Code
# first we need to modify the hash function to handle integers
def my_hash(item, s):
if isinstance(item, int):
return item % s
if isinstance(item, str):
sum_chars = 0
for char in item:
sum_chars += ord(char)
return sum_chars % s
def linearProbingHash(item_list, hash_table_size):
table = {i: None for i in range(hash_table_size)}
for value in item_list:
key = my_hash(value, hash_table_size)
if None in table.values(): # check if there are any free spaces in the hash table
while table[key]:
if key == hash_table_size - 1: # make table circular (if you reach the end, go to 0)
key = -1
key += 1
else:
raise RuntimeError("Hash-Table too small.")
table[key] = value
return table
lisf = [10,45,43,76,57,12,77,13]
print(linearProbingHash(lisf, 11))
###Output
{0: 43, 1: 45, 2: 76, 3: 57, 4: 12, 5: 77, 6: 13, 7: None, 8: None, 9: None, 10: 10}
###Markdown
Create a hash table of size $s = 11$ in the form of an associative array for the list `list = ["his", "her", "this", "that", "what", "when", "how", "why", "i dont know"]`. The hash table should look as follows: `table = {0: 'her', 1: 'this', ... , 10: None}`. (10 points)
###Code
def linearProbingHashStrings(item_list, hash_table_size):
table = {i: None for i in range(hash_table_size)}
for value in item_list:
key = my_hash(value, hash_table_size)
if None in table.values(): # check if there are any free spaces in the hash table
while table[key]:
if key == hash_table_size - 1: # make table circular (if you reach the end, go to 0)
key = -1
key += 1
else:
raise RuntimeError("Hash-Table too small.")
table[key] = value
return table
stringList = ["his", "her", "this", "that", "what", "when", "how", "why", "i dont know"]
print(linearProbingHashStrings(stringList, 11))
#print(linearProbingHashStrings(stringList, 7)) # raises a RuntimeError because 7 < len(list)
###Output
{0: 'her', 1: 'this', 2: None, 3: 'why', 4: 'that', 5: 'his', 6: 'when', 7: 'what', 8: 'how', 9: 'i dont know', 10: None}
###Markdown
Write a function `all_cocktails(filename)` that reads the file `cocktails.json` into an associative array (dictionary) `recipes`, and a function `all_ingredients(recipes)` that returns a complete list of all ingredients. (10 points)
###Code
import json
def all_cocktails(filename):
with open(filename) as f:
j = json.load(f)
return {i: cocktail for i, cocktail in enumerate(j['cocktails'])}
def all_ingredients(recipes):
ingredients = set()
for r in recipes.values():
for listing in r['ingredients']:
try:
ingredients.add(listing['ingredient'])
except KeyError: # not all children of 'ingredients' contain 'ingredient'
pass
return ingredients
recipes = all_cocktails('cocktails.json')
numberOfIngredients = len(all_ingredients(recipes))
print (all_ingredients(recipes))
print (numberOfIngredients)
recipes = all_cocktails('cocktails.json')
assert (numberOfIngredients) == (37)
assert ('Apricot brandy' in all_ingredients(recipes))
assert ('Pineapple juice' in all_ingredients(recipes))
assert ('Campari' in all_ingredients(recipes))
assert ('Kirsch' in all_ingredients(recipes))
assert ('Pisco' in all_ingredients(recipes))
###Output
_____no_output_____ |
examples/reference/templates/GoldenLayout.ipynb | ###Markdown
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:* **`header`**: The header area of the HTML page* **`sidebar`**: A collapsible sidebar* **`main`**: The main area of the application* **`modal`**: A modal area which can be opened and closed from PythonThese four areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components into these areas. Unlike other layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. Parameters:In addition to the four different areas we can populate the default templates also provide a few additional parameters:* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.* **`header_background`** (str): Optional header background color override.* **`header_color`** (str): Optional header text color override.* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).* **`site`** (str): Name of the site. Will be shown in the header. Default is '', i.e. not shown.* **`site_url`** (str): Url of the site and logo. Default is "/".* **`title`** (str): A title to show in the header.* **`theme`** (Theme): A Theme class (available in `panel.template.theme`)* **`sidebar_width`** (int): The width of the sidebar in percent. Default is 20.________ In this case we are using the `GoldenTemplate`, built using the [Golden Layout CSS](https://golden-layout.com/), which allows for the creation of tabs that can be moved around. Due to the movable tabs this Template is a little different than the others. The sidebar works similarly to the other templates, but to have your displays render in different tabs, we have to make separate calls to `.main.append()`. Here is an example of how you can set up a display using this template:
###Code
golden = pn.template.GoldenTemplate(title='Golden Template')
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
responsive=True, min_height=400)
@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
responsive=True, min_height=400)
golden.sidebar.append(freq)
golden.sidebar.append(phase)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.servable();
###Output
_____no_output_____
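###Markdown
The header and theme options listed above can be passed straight to the template constructor. A minimal sketch (the colour values and the `DarkTheme` choice are illustrative assumptions, not taken from the example above):

```python
import panel as pn

golden = pn.template.GoldenTemplate(
    title='Golden Template',            # shown in the header
    header_background='#2f2f2f',        # illustrative colour override
    header_color='white',
    theme=pn.template.theme.DarkTheme,  # any Theme class from panel.template.theme
)
```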
###Markdown
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:* **`header`**: The header area of the HTML page* **`sidebar`**: A collapsible sidebar* **`main`**: The main area of the application* **`modal`**: A modal area which can be opened and closed from PythonThese four areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components into these areas. Unlike other layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. Parameters:In addition to the four different areas we can populate the default templates also provide a few additional parameters:* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.* **`header_background`** (str): Optional header background color override.* **`header_color`** (str): Optional header text color override.* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).* **`theme`** (Theme): A Theme class (available in `panel.template.theme`)* **`title`** (str): A title to show in the header.________ In this case we are using the `GoldenTemplate`, built using the Golden Layout CSS, which allows for the creation of tabs that can be moved around. Due to the movable tabs this Template is a little different than the others. The sidebar works similarly to the other templates, but to have your displays render in different tabs, we have to make separate calls to `.main.append()`. Here is an example of how you can set up a display using this template:
###Code
golden = pn.template.GoldenTemplate(title='Golden Template')
pn.config.sizing_mode = 'stretch_width'
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
responsive=True, min_height=400)
@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
responsive=True, min_height=400)
golden.sidebar.append(freq)
golden.sidebar.append(phase)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.servable();
###Output
_____no_output_____
###Markdown
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:* **`header`**: The header area of the HTML page* **`sidebar`**: A collapsible sidebar* **`main`**: The main area of the application* **`modal`**: A modal area which can be opened and closed from PythonThese four areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components into these areas. Unlike other layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. Parameters:In addition to the four different areas we can populate the default templates also provide a few additional parameters:* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.* **`header_background`** (str): Optional header background color override.* **`header_color`** (str): Optional header text color override.* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).* **`site`** (str): Name of the site. Will be shown in the header. Default is '', i.e. not shown.* **`site_url`** (str): Url of the site and logo. Default is "/".* **`title`** (str): A title to show in the header.* **`theme`** (Theme): A Theme class (available in `panel.template.theme`)* **`sidebar_width`** (int): The width of the sidebar in percent. Default is 20.________ In this case we are using the `GoldenTemplate`, built using the [Golden Layout CSS](https://golden-layout.com/), which allows for the creation of tabs that can be moved around. Due to the movable tabs this Template is a little different than the others. The sidebar works similarly to the other templates, but to have your displays render in different tabs, we have to make separate calls to `.main.append()`. Here is an example of how you can set up a display using this template:
###Code
golden = pn.template.GoldenTemplate(title='Golden Template')
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
responsive=True, min_height=400)
@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
responsive=True, min_height=400)
golden.sidebar.append(freq)
golden.sidebar.append(phase)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.servable();
###Output
_____no_output_____
###Markdown
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:* **`header`**: The header area of the HTML page* **`sidebar`**: A collapsible sidebar* **`main`**: The main area of the application* **`modal`**: A modal area which can be opened and closed from PythonThese four areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components into these areas. Unlike other layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. Parameters:In addition to the four different areas we can populate the default templates also provide a few additional parameters:* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.* **`header_background`** (str): Optional header background color override.* **`header_color`** (str): Optional header text color override.* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).* **`site`** (str): Name of the site. Will be shown in the header. Default is '', i.e. not shown.* **`site_url`** (str): Url of the site and logo. Default is "/".* **`title`** (str): A title to show in the header.* **`theme`** (Theme): A Theme class (available in `panel.template.theme`)________ In this case we are using the `GoldenTemplate`, built using the Golden Layout CSS, which allows for the creation of tabs that can be moved around. Due to the movable tabs this Template is a little different than the others. The sidebar works similarly to the other templates, but to have your displays render in different tabs, we have to make separate calls to `.main.append()`. Here is an example of how you can set up a display using this template:
###Code
golden = pn.template.GoldenTemplate(title='Golden Template')
pn.config.sizing_mode = 'stretch_width'
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
responsive=True, min_height=400)
@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
responsive=True, min_height=400)
golden.sidebar.append(freq)
golden.sidebar.append(phase)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.servable();
###Output
_____no_output_____
###Markdown
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:* **`header`**: The header area of the HTML page* **`sidebar`**: A collapsible sidebar* **`main`**: The main area of the application* **`modal`**: A modal area which can be opened and closed from PythonThese four areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components into these areas. Unlike other layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. Parameters:In addition to the four different areas we can populate the default templates also provide a few additional parameters:* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.* **`header_background`** (str): Optional header background color override.* **`header_color`** (str): Optional header text color override.* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).* **`site`** (str): Name of the site. Will be shown in the header. Default is '', i.e. not shown.* **`site_url`** (str): Url of the site and logo. Default is "/".* **`title`** (str): A title to show in the header.* **`theme`** (Theme): A Theme class (available in `panel.template.theme`)* **`sidebar_width`** (int): The width of the sidebar in percent. Default is 20.________ In this case we are using the `GoldenTemplate`, built using the Golden Layout CSS, which allows for the creation of tabs that can be moved around. Due to the movable tabs this Template is a little different than the others. The sidebar works similarly to the other templates, but to have your displays render in different tabs, we have to make separate calls to `.main.append()`. Here is an example of how you can set up a display using this template:
###Code
golden = pn.template.GoldenTemplate(title='Golden Template')
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
responsive=True, min_height=400)
@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
responsive=True, min_height=400)
golden.sidebar.append(freq)
golden.sidebar.append(phase)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.main.append(
pn.Row(
pn.Card(hv.DynamicMap(sine), title='Sine'),
pn.Card(hv.DynamicMap(cosine), title='Cosine')
)
)
golden.servable();
###Output
_____no_output_____ |
datapreprocess_fillna.ipynb | ###Markdown
###Code
import pandas as pd
!pwd
!ls -l ./auto-mpg_1.csv
df = pd.read_csv('./auto-mpg_1.csv',header=None)
df.info() # check the DataFrame info
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 398 entries, 0 to 397
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 398 non-null float64
1 1 398 non-null int64
2 2 398 non-null float64
3 3 398 non-null object
4 4 398 non-null float64
5 5 398 non-null float64
6 6 398 non-null int64
7 7 398 non-null int64
8 8 398 non-null object
dtypes: float64(4), int64(3), object(2)
memory usage: 28.1+ KB
###Markdown

###Code
df.describe() # DataFrame summary statistics
df[3].describe() # summary statistics for column 3
###Output
_____no_output_____
###Markdown

###Code
df[8].describe() # summary statistics for column 8
###Output
_____no_output_____
###Markdown
###Code
df[0].mean() # mean of column 0
df[0].std() # standard deviation of column 0
df[0].count() # number of values in column 0
df[0].min() # minimum of column 0
df[0].max() # maximum of column 0
df.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
df
df.plot(x='weight',y='mpg',kind='scatter')
df.describe() , df.info()
df[['mpg','weight']].plot(kind='box')
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
# df.info()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
missing value: age, embarked, deck, embark_town
###Code
df['age'].fillna(29) # fillna fills in the NaN values
df.info()
df['deck'].value_counts()
df_na = df.dropna(axis=1)
df_na.info()
df_desk = df.dropna(subset=['deck'], how='any', axis='index')
df_desk.info()
df_age = df['age'].fillna(29)
type(df_age), df_age.shape
df['age'] = df_age  # replace the age column with the filled Series
df.info()
df['deck'].value_counts()
df['deck'] = df['deck'].fillna('B')  # fill missing deck values with 'B'
df.info()
df['embarked'].value_counts()
df['embarked'] =df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg')
df.info()
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
pd.read_excel('./시도별전출입인구수.xlsx')
df = pd.read_excel('./시도별전출입인구수.xlsx')
# df.info()
# df.head(5)
# df.describe()
# df.fillna(method="ffill")
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
# df.info()
# df.describe(include='all')
df.head(5)
###Output
_____no_output_____
###Markdown
missing value : age, embarked, deck, embark_town
###Code
df['age'].fillna(29)
df.info()
df_deck = df.dropna(subset=['deck'], how='any', axis='index')
df_deck.info()
df_age = df['age'].fillna(29)
type(df_age), df_age.shape
df['age'] = df_age
df.info()
df['deck'].value_counts()
df['deck'] = df['deck'].fillna('B')
df.info()
df['embarked'].value_counts()
df['embarked'] = df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg')
df.info()
###Output
_____no_output_____
###Markdown
missing values: age, embarked, deck, embark_town
###Code
df['age'].fillna(29)
df['deck'].value_counts()
df_na = df.dropna()
df_na.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 survived 891 non-null int64
1 pclass 891 non-null int64
2 sex 891 non-null object
3 sibsp 891 non-null int64
4 parch 891 non-null int64
5 fare 891 non-null float64
6 class 891 non-null category
7 who 891 non-null object
8 adult_male 891 non-null bool
9 alive 891 non-null object
10 alone 891 non-null bool
dtypes: bool(2), category(1), float64(1), int64(4), object(3)
memory usage: 58.5+ KB
###Markdown
Drop the rows that contain null values
###Code
df_deck = df.dropna(subset=['deck'], how = 'any', axis='index')
df_deck.info()
df_age = df['age'].fillna(29)
type(df_age), df_age.shape
###Output
_____no_output_____
###Markdown
Replace the age column with the filled values
###Code
df['age'] = df_age
df.info()
df['deck'].value_counts()
###Output
_____no_output_____
###Markdown
Replace the NaN values in the deck column of df with 'B'
###Code
df['deck'] = df['deck'].fillna('B')
df.info()
df['embarked'].value_counts()
df['embarked'] = df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg ')
df.info()
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
# df.info()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
missing values: age, embarked, deck, embark_town
###Code
df['age'].fillna(29)
df.info()
df['deck'].value_counts()
df_deck = df.dropna(subset=['deck'], how='any', axis=0)
df_deck.info()
df_age = df['age'].fillna(29)
type(df_age), df_age.shape
df['age'] = df_age
df.info()
df['deck'].value_counts()
df['deck'] = df['deck'].fillna('B')
df.info()
df['embarked'].value_counts()
df['embarked'] = df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg')
df.info()
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
#df.info()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
missing value : age, embarked, deck, embark_town
###Code
df_na = df.dropna()
df_na
df_na.info()
df_deck = df.dropna(subset=['deck'], how='any', axis='index')
df_deck.info()
df_age = df['age'].fillna(29)
type(df_age)
df['deck'].value_counts()
df['deck'].fillna('B')
df['embarked'].value_counts()
df['embarked'] = df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg')
df.info()
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
df.info()
df.describe(include='all')
df['age'].fillna(29)
###Output
_____no_output_____
###Markdown
Missing values: age, embarked, deck, embark_town. The age column will be filled with a value close to the mean.
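A more general alternative to hard-coding 29 is to derive the fill value from the data itself; a one-line sketch (not what the cells below do — they keep the constant):

```python
df['age'] = df['age'].fillna(df['age'].mean())  # the mean age is roughly 29.7 here
```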
###Code
df_deck = df.dropna(subset=['deck'], how='any', axis='index')
df_deck.info()
df_age = df['age'].fillna(29)
type(df_age), df_age.shape
df['age'] = df_age
df.info()
df['deck'].value_counts()
df['deck']= df['deck'].fillna('B')
df.info()
df['embarked'].value_counts()
df['embarked'] = df['embarked'].fillna('C')
df.info()
df['embark_town'].value_counts()
df['embark_town'] = df['embark_town'].fillna('Cherbourg')
df.info()
###Output
_____no_output_____ |
deeplearning1/nbs-custom-mine/lesson3_03_imagenet_batchnorm.ipynb | ###Markdown
This notebook explains how to add batch normalization to VGG. The code shown here is implemented in [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py), and there is a version of ``vgg_ft`` (our fine tuning function) with batch norm called ``vgg_ft_bn`` in [utils.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/utils.py).
###Code
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import print_function, division
###Output
Using Theano backend.
###Markdown
The problem, and the solution The problem The problem that we faced in lesson 3 is that when we wanted to add batch normalization, we initialized *all* the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the *time* will be spent training these weights. What do you think? Trying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params). The solution The solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (*gamma* - which is used to multiply by each activation, and *beta* - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without the new batchnorm layer. And that means that all the pre-trained weights are no longer of any use! So instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of the activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before. The benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network. To calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled **Download links to ILSVRC2013 image data**. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the **CLS-LOC dataset** section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however. Note that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes. Adding batchnorm to Imagenet Setup Sample As per usual, we create a sample so we can experiment more rapidly.
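Written out explicitly (a worked restatement of the argument above, not code from the notebook): if $\mu$ and $\sigma^2$ are the per-unit mean and variance of a dense layer's activations computed over imagenet, then choosing the batchnorm parameters as $\gamma = \sqrt{\sigma^2 + \epsilon}$ and $\beta = \mu$ gives\begin{equation*}\mathrm{BN}(x) = \gamma\,\frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta = x\end{equation*}so the freshly inserted layer starts out as an identity and the downstream pre-trained weights still see exactly the activations they were trained on (ignoring the Keras-version quirk about variance vs. standard deviation that the code below works around).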
###Code
%pushd data/imagenet
%cd train
%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid
from shutil import copyfile
g = glob('*')
for d in g:
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
g = glob('*/*.JPEG')
shuf = np.random.permutation(g)
for i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i])
%cd ../valid
g = glob('*/*.JPEG')
shuf = np.random.permutation(g)
for i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i])
%cd ..
%mkdir sample/results
%popd
###Output
_____no_output_____
###Markdown
Data setup We set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory.
###Code
sample_path = 'data/jhoward/imagenet/sample/'
# This is the path to my fast SSD - I put datasets there when I can to get the speed benefit
fast_path = '/home/jhoward/ILSVRC2012_img_proc/'
#path = '/data/jhoward/imagenet/sample/'
path = 'data/jhoward/imagenet/'
batch_size=64
samp_trn = get_data(path+'train')
samp_val = get_data(path+'valid')
save_array(sample_path+'results/trn.dat', samp_trn)
save_array(sample_path+'results/val.dat', samp_val)
samp_trn = load_array(sample_path+'results/trn.dat')
samp_val = load_array(sample_path+'results/val.dat')
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
(samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels,
samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path)
###Output
Found 25000 images belonging to 1000 classes.
Found 5000 images belonging to 1000 classes.
Found 0 images belonging to 0 classes.
###Markdown
Model setup Since we're just working with the dense layers, we should pre-compute the output of the convolutional layers.
###Code
vgg = Vgg16()
model = vgg.model
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
conv_layers = layers[:last_conv_idx+1]
dense_layers = layers[last_conv_idx+1:]
conv_model = Sequential(conv_layers)
samp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2)
samp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2)
save_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat)
save_array(sample_path+'results/conv_feat.dat', samp_conv_feat)
samp_conv_feat = load_array(sample_path+'results/conv_feat.dat')
samp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat')
samp_conv_val_feat.shape
###Output
_____no_output_____
###Markdown
This is our usual Vgg network just covering the dense layers:
###Code
def get_dense_layers():
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(1000, activation='softmax')
]
dense_model = Sequential(get_dense_layers())
for l1, l2 in zip(dense_layers, dense_model.layers):
l2.set_weights(l1.get_weights())
###Output
_____no_output_____
###Markdown
Check model It's a good idea to check that your models are giving reasonable answers, before using them.
###Code
dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])
dense_model.evaluate(samp_conv_val_feat, samp_val_labels)
model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])
# should be identical to above
model.evaluate(val, val_labels)
# should be a little better than above, since VGG authors overfit
dense_model.evaluate(conv_feat, trn_labels)
###Output
24992/25000 [============================>.] - ETA: 0s
###Markdown
Adding our new layers Calculating batchnorm params To calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this:
###Code
k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()],
[dense_model.layers[2].output])
###Output
_____no_output_____
###Markdown
Then we can call the function to get our layer activations:
###Code
d0_out = k_layer_out([samp_conv_val_feat, 0])[0]
k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()],
[dense_model.layers[4].output])
d2_out = k_layer_out([samp_conv_val_feat, 0])[0]
###Output
_____no_output_____
###Markdown
Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need).
###Code
mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0)
mu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0)
###Output
_____no_output_____
###Markdown
Creating batchnorm model Now we're ready to create and insert our layers just after each dense layer.
###Code
nl1 = BatchNormalization()
nl2 = BatchNormalization()
bn_model = insert_layer(dense_model, nl2, 5)
bn_model = insert_layer(bn_model, nl1, 3)
bnl1 = bn_model.layers[3]
bnl4 = bn_model.layers[6]
###Output
_____no_output_____
###Markdown
After inserting the layers, we can set their weights to the variance and mean we just calculated.
###Code
bnl1.set_weights([var0, mu0, mu0, var0])
bnl4.set_weights([var2, mu2, mu2, var2])
bn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])
###Output
_____no_output_____
###Markdown
We should find that the new model gives identical results to those provided by the original VGG model.
###Code
bn_model.evaluate(samp_conv_val_feat, samp_val_labels)
bn_model.evaluate(samp_conv_feat, samp_trn_labels)
###Output
24992/25000 [============================>.] - ETA: 0s
###Markdown
Optional - additional fine-tuning Now that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch.
###Code
feat_bc = bcolz.open(fast_path+'trn_features.dat')
labels = load_array(fast_path+'trn_labels.dat')
val_feat_bc = bcolz.open(fast_path+'val_features.dat')
val_labels = load_array(fast_path+'val_labels.dat')
bn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size,
validation_data=(val_feat_bc, val_labels))
###Output
Train on 2522348 samples, validate on 98200 samples
Epoch 1/1
2522348/2522348 [==============================] - 2521s - loss: 1.0574 - acc: 0.7191 - val_loss: 1.3572 - val_acc: 0.6720
###Markdown
The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way.
###Code
bn_model.save_weights(path+'models/bn_model2.h5')
bn_model.load_weights(path+'models/bn_model2.h5')
###Output
_____no_output_____
###Markdown
Create combined model

Our last step is simply to copy our new dense layers onto the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers).
###Code
new_layers = copy_layers(bn_model.layers)
for layer in new_layers:
conv_model.add(layer)
copy_weights(bn_model.layers, new_layers)
conv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])
conv_model.evaluate(samp_val, samp_val_labels)
conv_model.save_weights(path+'models/inet_224squash_bn.h5')
###Output
_____no_output_____ |
experiments/2020_06_22 designing filters.ipynb | ###Markdown
Resolving people
###Code
df_people = df[df['GENDER'].isin(['M', 'F'])].copy()
df_people_small = df_people.head(600)
f = Filter(df_people, "res_WIKIDATA_IDs")
f.add_property_filter("P31", 'Q5') # human
f.add_label_filter("PREFERRED_NAME", threshold=90, include_aliases=True)
f.view_filters()
f.process_dataframe()
df_new = f.get_dataframe()
###Output
_____no_output_____ |
dmu1/dmu1_ml_Herschel-Stripe-82/1.8.1_DECaLS.ipynb | ###Markdown
We use magnitudes between 15.0 and 17.0.
###Code
# Aperture correction
mag_corr['z'] = np.nan
mag_corr['z'], num, std = aperture_correction(
magnitudes['z'][4], magnitudes['z'][4],
stellarities['z'],
mag_min=15.0, mag_max=17.0)
print("Aperture correction for z band:")
print("Correction: {}".format(mag_corr['z']))
print("Number of source used: {}".format(num))
print("RMS: {}".format(std))
###Output
Aperture correction for z band:
Correction: 0.0
Number of source used: 176285
RMS: 0.0
###Markdown
I.f - Y band
###Code
nb_plot_mag_ap_evol(magnitudes['y'], stellarities['y'], labels=apertures)
nb_plot_mag_vs_apcor(magnitudes['y'][4],
magnitudes['y'][4],
stellarities['y'])
# Aperture correction
mag_corr['y'] = np.nan
#mag_corr['y'], num, std = aperture_correction(
# magnitudes['y'][4], magnitudes['y'][5],
# stellarities['y'],
# mag_min=16.0, mag_max=17.5)
#print("Aperture correction for y band:")
#print("Correction: {}".format(mag_corr['y']))
#print("Number of source used: {}".format(num))
#print("RMS: {}".format(std))
###Output
_____no_output_____
###Markdown
II - Stellarity

Legacy Survey does not provide a 0 to 1 stellarity, so we replace items flagged as PSF according to the following table:

\begin{equation*}
P(star) = \frac{ \prod_{i} P(star)_i }{ \prod_{i} P(star)_i + \prod_{i} P(galaxy)_i }
\end{equation*}

where $i$ is the band, using the same probabilities as UKIDSS:

| HSC flag | UKIDSS flag | Meaning         | P(star) | P(galaxy) | P(noise) | P(saturated) |
|:--------:|:-----------:|:----------------|--------:|----------:|---------:|-------------:|
|          | -9          | Saturated       | 0.0     | 0.0       | 5.0      | 95.0         |
|          | -3          | Probable galaxy | 25.0    | 70.0      | 5.0      | 0.0          |
|          | -2          | Probable star   | 70.0    | 25.0      | 5.0      | 0.0          |
| 0        | -1          | Star            | 90.0    | 5.0       | 5.0      | 0.0          |
|          | 0           | Noise           | 5.0     | 5.0       | 90.0     | 0.0          |
| 1        | +1          | Galaxy          | 5.0     | 90.0      | 5.0      | 0.0          |
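As a hedged illustration of this formula only (toy arrays, not part of the pipeline), the combined probability could be computed per source like this:
###Code
# Hypothetical illustration of the combined-stellarity formula above (toy numbers only).
# p_star / p_gal hold assumed P(star)_i and P(galaxy)_i per band (rows) for each source (columns).
import numpy as np
p_star = np.array([[0.90, 0.05, 0.70],
                   [0.90, 0.05, 0.25]])
p_gal = np.array([[0.05, 0.90, 0.25],
                  [0.05, 0.90, 0.70]])
num = np.prod(p_star, axis=0)
p_combined = num / (num + np.prod(p_gal, axis=0))  # P(star) for each source
###Output
_____no_output_____
###Markdown
In practice, the cell below simply maps the binary PSF flag for the g band to the fixed probabilities 0.9 and 0.05 from the table.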
###Code
stellarities['g'][np.isclose(stellarities['g'], 1.)] = 0.9
stellarities['g'][np.isclose(stellarities['g'], 0.)] = 0.05
###Output
_____no_output_____
###Markdown
II - Column selection
###Code
imported_columns = OrderedDict({
"objid": "decals_id",
"brickid": "brickid",
"ra": "decals_ra",
"dec": "decals_dec",
"decam_flux": "decam_flux_TEMP",
"decam_flux_ivar": "decam_flux_ivar_TEMP",
"decam_apflux": "decam_apflux_TEMP",
"decam_apflux_ivar": "decam_apflux_ivar_TEMP",
})
catalogue = Table.read("../../dmu0/dmu0_DECaLS/data/DECaLS_Herschel-Stripe-82.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
catalogue["decals_id"] = 100000*catalogue["brickid"].astype(np.int64) + catalogue["decals_id"].astype(np.int64)
catalogue.remove_columns("brickid")
epoch = 2017
#catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,0], name="f_decam_u"))
catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,1], name="f_decam_g"))
catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,2], name="f_decam_r"))
#catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,3], name="f_decam_i"))
catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,4], name="f_decam_z"))
#catalogue.add_column(Column(catalogue["decam_flux_TEMP"][:,5], name="f_decam_y"))
#catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,0], name="ferr_decam_u"))
catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,1], name="ferr_decam_g"))
catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,2], name="ferr_decam_r"))
#catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,3], name="ferr_decam_i"))
catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,4], name="ferr_decam_z"))
#catalogue.add_column(Column(catalogue["decam_flux_ivar_TEMP"][:,5], name="ferr_decam_y"))
#For the aperture fluxes, there are 8 (0-7), we take 4 (2.0")
#DECam aperture fluxes on the co-added images in apertures of radius [0.5,0.75,1.0,1.5,2.0,3.5,5.0,7.0] arcsec in ugrizY
#catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,0], name="f_ap_decam_u")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,1], name="f_ap_decam_g")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,2], name="f_ap_decam_r")[:,4])
#catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,3], name="f_ap_decam_i")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,4], name="f_ap_decam_z")[:,4])
#catalogue.add_column(Column(catalogue["decam_apflux_TEMP"][:,5], name="f_ap_decam_y")[:,4])
#catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,0], name="ferr_ap_decam_u")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,1], name="ferr_ap_decam_g")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,2], name="ferr_ap_decam_r")[:,4])
#catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,3], name="ferr_ap_decam_i")[:,4])
catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,4], name="ferr_ap_decam_z")[:,4])
#catalogue.add_column(Column(catalogue["decam_apflux_ivar_TEMP"][:,5], name="ferr_ap_decam_y")[:,4])
catalogue.remove_columns(["decam_flux_TEMP",
"decam_flux_ivar_TEMP",
"decam_apflux_TEMP",
"decam_apflux_ivar_TEMP"])
# Clean table metadata
catalogue.meta = None
flux_to_mag_vect = np.vectorize(flux_to_mag)
# Adding flux and band-flag columns
for col in catalogue.colnames:
catalogue[col].unit = None
if col.startswith('f_'):
#Replace 0 flux with NaN and
catalogue[col][catalogue[col] == 0.0] = np.nan
#Replace 1/sigma^2 with sigma
errcol = "ferr{}".format(col[1:])
catalogue[errcol][catalogue[errcol] == 0.0] = np.nan
catalogue[errcol] = np.sqrt(1/np.array(catalogue[errcol]))
#catalogue[errcol][catalogue[errcol] == None] = np.nan
#Replace nanomaggies with uJy
#a nanomaggy is approximately 3.631×10-6 Jy - http://www.sdss3.org/dr8/algorithms/magnitudes.php#nmgy
catalogue[col] = catalogue[col] * 3.631
catalogue[errcol] = catalogue[errcol] * 3.631
#Compute magnitudes and errors in magnitudes. This function expects Jy so must multiply uJy by 1.e-6
mag, error = flux_to_mag(np.array(catalogue[col])* 1.e-6, np.array(catalogue[errcol])* 1.e-6)
if 'ap' in col:
mag += mag_corr[col[-1]]
catalogue[col],catalogue[errcol] = mag_to_flux(mag,error)
catalogue.add_column(Column(mag, name="m{}".format(col[1:])))
catalogue.add_column(Column(error, name="m{}".format(errcol[1:])))
# Band-flag column
if 'ap' not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
#remove units from table
for col in catalogue.colnames:
catalogue[col].unit = None
catalogue.add_column(Column(data=stellarities['g'], name="decals_stellarity")) #Stellarites computed earlier
catalogue[:10].show_in_notebook()
###Output
_____no_output_____
###Markdown
III - Removal of duplicated sources

We remove duplicated objects from the input catalogues.
###Code
SORT_COLS = [#'merr_ap_decam_u',
'merr_ap_decam_g',
'merr_ap_decam_r',
#'merr_ap_decam_i',
'merr_ap_decam_z',
#'merr_ap_decam_y'
]
FLAG_NAME = 'decals_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(
catalogue, RA_COL, DEC_COL,
sort_col= SORT_COLS,
flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
###Output
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/astropy/table/column.py:1096: MaskedArrayFutureWarning: setting an item on a masked array which has a shared mask will not copy the mask and also change the original mask array in the future.
Check the NumPy 1.11 release notes for more information.
ma.MaskedArray.__setitem__(self, index, value)
###Markdown
III - Astrometry correction

We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g-band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
###Code
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_Herschel-Stripe-82.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
catalogue[RA_COL] = catalogue[RA_COL] + delta_ra.to(u.deg)
catalogue[DEC_COL] = catalogue[DEC_COL] + delta_dec.to(u.deg)
catalogue[RA_COL].unit = u.deg
catalogue[DEC_COL].unit = u.deg
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
###Output
_____no_output_____
###Markdown
IV - Flagging Gaia objects
###Code
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
GAIA_FLAG_NAME = "decals_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
###Output
717665 sources flagged.
###Markdown
V - Saving to disk
###Code
catalogue.write("{}/DECaLS.fits".format(OUT_DIR), overwrite=True)
###Output
_____no_output_____
###Markdown
Herschel Stripe 82 master catalogue

Preparation of DECam Legacy Survey data

This catalogue comes from `dmu0_DECaLS`.

In the catalogue, we keep:
- The `object_id` as unique object identifier;
- The position;
- The u, g, r, i, z, Y aperture magnitude (2”);
- The u, g, r, i, z, Y Kron fluxes and magnitudes.

We check all ugrizY bands, then only take the bands for which there are measurements.
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
plt.style.use('ggplot')
from collections import OrderedDict
import os
from astropy import units as u
from astropy import visualization as vis
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, nb_plot_mag_ap_evol, \
nb_plot_mag_vs_apcor, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux, aperture_correction, flux_to_mag
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "decals_ra"
DEC_COL = "decals_dec"
# Pristine LS catalogue
orig_decals = Table.read("../../dmu0/dmu0_DECaLS/data/DECaLS_Herschel-Stripe-82.fits")
###Output
WARNING: UnitsWarning: '1/deg^2' did not parse as fits unit: Numeric factor not supported by FITS [astropy.units.core]
WARNING: UnitsWarning: 'nanomaggy' did not parse as fits unit: At col 0, Unit 'nanomaggy' not supported by the FITS standard. [astropy.units.core]
WARNING: UnitsWarning: '1/nanomaggy^2' did not parse as fits unit: Numeric factor not supported by FITS [astropy.units.core]
WARNING: UnitsWarning: '1/arcsec^2' did not parse as fits unit: Numeric factor not supported by FITS [astropy.units.core]
###Markdown
I - Aperture correction

To compute the aperture correction we need to determine two parameters: the target aperture and the range of magnitudes for the stars that will be used to compute the correction.

**Target aperture**: To determine the target aperture, we simulate a curve of growth using the provided apertures and draw two figures:
- The evolution of the magnitudes of the objects, by plotting the aperture number vs the mean magnitude on the same plot.
- The mean gain (loss when negative) of magnitude in each aperture compared to the previous one (except for the first, of course).

As target aperture, we should use the smallest (i.e. least noisy) aperture for which most of the flux is captured.

**Magnitude range**: To know what magnitude limits to use when doing the aperture correction, we plot, for each magnitude bin, the correction that is computed and its RMS. We should then use the widest limits (to use more stars) over which the correction is stable and shows little dispersion.
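As a minimal sketch of the aperture-correction idea (assuming a simple stellarity cut and a median offset; this is not the `herschelhelp_internal.utils.aperture_correction` implementation used below), the correction is essentially the typical offset between target- and measured-aperture magnitudes for stars in the chosen range:
###Code
# Hedged sketch of the idea only -- NOT the herschelhelp_internal implementation.
import numpy as np
def sketch_aperture_correction(mag_aper, mag_target, stellarity, mag_min=16.0, mag_max=19.0):
    "Median offset between target- and measured-aperture magnitudes for stars in [mag_min, mag_max]."
    good = ((stellarity > 0.5) &                          # assumed star selection
            (mag_aper >= mag_min) & (mag_aper <= mag_max) &
            np.isfinite(mag_aper) & np.isfinite(mag_target))
    diff = mag_target[good] - mag_aper[good]              # correction to add to aperture magnitudes
    return np.median(diff), good.sum(), np.std(diff)
###Output
_____no_output_____
###Markdown
The actual curves of growth and per-band corrections are computed below with the `herschelhelp_internal` helpers.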
###Code
bands = ["u", "g", "r", "i", "z", "y"]
band_index = {"u":0,"g":1, "r":2, "i":3, "z":4, "y":5}
apertures = [0, 1, 2, 3, 4, 5, 6, 7]
aperture_sizes = [0.5, 0.75, 1.0, 1.5, 2.0, 3.5, 5.0, 7.0] #arcsec aperture sizes
flux = {}
flux_errors ={}
magnitudes = {}
flux_errors ={}
magnitude_errors = {}
stellarities = {}
flux_to_mag_vect = np.vectorize(flux_to_mag)
for band in bands:
flux[band] = np.transpose(np.array(orig_decals["decam_apflux"][:,band_index[band]])) #np.transpose(np.array( orig_decals["decam_apflux"], dtype=np.float ))
flux_errors[band] = np.transpose(np.array(orig_decals["decam_apflux_ivar"][:,band_index[band]])) #np.transpose(np.array( orig_legacy["apflux_ivar_{}".format(band)], dtype=np.float ))
magnitudes[band], magnitude_errors[band] = flux_to_mag_vect(flux[band] * 3.631e-6 ,flux_errors[band] * 3.631e-6)
stellarities[band] = np.full(len(orig_decals),0., dtype='float32')
stellarities[band][np.array( orig_decals["type"]) == "PSF " ] = 1.
stellarities[band][np.array( orig_decals["type"]) == "PSF" ] = 1.
# Some sources have an infinite magnitude
mask = np.isinf(magnitudes[band])
magnitudes[band][mask] = np.nan
magnitude_errors[band][mask] = np.nan
mag_corr = {}
###Output
/opt/herschelhelp_internal/herschelhelp_internal/utils.py:76: RuntimeWarning: divide by zero encountered in log10
magnitudes = 2.5 * (23 - np.log10(fluxes)) - 48.6
/opt/herschelhelp_internal/herschelhelp_internal/utils.py:80: RuntimeWarning: invalid value encountered in double_scalars
errors = 2.5 / np.log(10) * errors_on_fluxes / fluxes
/opt/herschelhelp_internal/herschelhelp_internal/utils.py:76: RuntimeWarning: invalid value encountered in log10
magnitudes = 2.5 * (23 - np.log10(fluxes)) - 48.6
###Markdown
1.a u band
###Code
nb_plot_mag_ap_evol(magnitudes['u'], stellarities['u'], labels=apertures)
###Output
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/numpy/lib/nanfunctions.py:703: RuntimeWarning: Mean of empty slice
warnings.warn("Mean of empty slice", RuntimeWarning)
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/numpy/lib/nanfunctions.py:703: RuntimeWarning: Mean of empty slice
warnings.warn("Mean of empty slice", RuntimeWarning)
###Markdown
The u band is entirely NaN.
###Code
nb_plot_mag_vs_apcor(magnitudes['u'][4],
magnitudes['u'][5],
stellarities['u'])
# Aperture correction
mag_corr['u'] = np.nan
#mag_corr['u'], num, std = aperture_correction(
# magnitudes['u'][4], magnitudes['u'][5],
# stellarities['u'],
# mag_min=16.0, mag_max=19.0)
#print("Aperture correction for g band:")
#print("Correction: {}".format(mag_corr['g']))
#print("Number of source used: {}".format(num))
#print("RMS: {}".format(std))
###Output
_____no_output_____
###Markdown
I.a - g band
###Code
nb_plot_mag_ap_evol(magnitudes['g'], stellarities['g'], labels=apertures)
###Output
_____no_output_____
###Markdown
We will use aperture 5 as target.
###Code
nb_plot_mag_vs_apcor(magnitudes['g'][4],
magnitudes['g'][5],
stellarities['g'])
###Output
_____no_output_____
###Markdown
We will use magnitudes between 16.0 and 19.0.
###Code
# Aperture correction
mag_corr['g'], num, std = aperture_correction(
magnitudes['g'][4], magnitudes['g'][5],
stellarities['g'],
mag_min=16.0, mag_max=19.0)
print("Aperture correction for g band:")
print("Correction: {}".format(mag_corr['g']))
print("Number of source used: {}".format(num))
print("RMS: {}".format(std))
###Output
Aperture correction for g band:
Correction: -0.0911514235846056
Number of source used: 151015
RMS: 0.02364389650630337
###Markdown
I.b - r band
###Code
nb_plot_mag_ap_evol(magnitudes['r'], stellarities['r'], labels=apertures)
###Output
_____no_output_____
###Markdown
We will use aperture 5 as target.
###Code
nb_plot_mag_vs_apcor(magnitudes['r'][4],
magnitudes['r'][5],
stellarities['r'])
###Output
_____no_output_____
###Markdown
We use magnitudes between 16.0 and 18.0.
###Code
# Aperture correction
mag_corr['r'], num, std = aperture_correction(
magnitudes['r'][4], magnitudes['r'][5],
stellarities['r'],
mag_min=16.0, mag_max=18.0)
print("Aperture correction for r band:")
print("Correction: {}".format(mag_corr['r']))
print("Number of source used: {}".format(num))
print("RMS: {}".format(std))
###Output
Aperture correction for r band:
Correction: -0.0465021447682048
Number of source used: 149159
RMS: 0.013977600173198289
###Markdown
I.d - i band
###Code
nb_plot_mag_ap_evol(magnitudes['i'], stellarities['i'], labels=apertures)
nb_plot_mag_vs_apcor(magnitudes['i'][4],
magnitudes['i'][4],
stellarities['i'])
# Aperture correction
mag_corr['i'] = np.nan
#mag_corr['i'], num, std = aperture_correction(
# magnitudes['i'][4], magnitudes['i'][5],
# stellarities['i'],
# mag_min=16.0, mag_max=17.5)
#print("Aperture correction for i band:")
#print("Correction: {}".format(mag_corr['i']))
#print("Number of source used: {}".format(num))
#print("RMS: {}".format(std))
###Output
_____no_output_____
###Markdown
I.e - z band
###Code
nb_plot_mag_ap_evol(magnitudes['z'], stellarities['z'], labels=apertures)
###Output
_____no_output_____
###Markdown
We will use aperture 4 as target.
###Code
nb_plot_mag_vs_apcor(magnitudes['z'][4],
magnitudes['z'][4],
stellarities['z'])
###Output
_____no_output_____ |
Collective Sampling and Search.ipynb | ###Markdown
Experiment: Collective Sampling and Search

An underwater robot collective is deployed to search for and rescue a star in distress.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = [12, 8]
import math
import numpy as np
from interaction import Interaction
from environment import Environment
from fish import Fish
from channel import Channel
from observer import Observer
from utils import generate_distortion, generate_fish, run_simulation
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Hear the signal of the star and run back to the deployment station to report it

Robots disperse until the first one hears the signal of the star. That robot broadcasts the information that it has detected the star. Thereupon, all robots switch to aggregation and return to the deployment station.
###Code
"""Hear star, gather at origin
"""
from events import InfoExternal
run_time = 30 # in seconds
num_fish = 15
arena_size = 30
arena_center = arena_size / 2.0
initial_spread = 1
fish_pos = initial_spread * np.random.rand(num_fish, 2) + arena_center - initial_spread / 2.0
clock_freqs = 1
verbose = False
distortion = generate_distortion(type='none', n=arena_size)
environment = Environment(
node_pos=fish_pos,
distortion=distortion,
prob_type='binary',
noise_magnitude=0.1,
conn_thres=6,
verbose=verbose
)
interaction = Interaction(environment, verbose=verbose)
channel = Channel(environment)
fish = generate_fish(
n=num_fish,
channel=channel,
interaction=interaction,
lim_neighbors=[2,3],
neighbor_weights=1.0,
fish_max_speeds=1,
clock_freqs=clock_freqs,
verbose=verbose
)
channel.set_nodes(fish)
observer = Observer(fish=fish, environment=environment, channel=channel)
missing_aircraft = InfoExternal('signal_aircraft')
for i in range(1, run_time):
observer.instruct(event=missing_aircraft, rel_clock=i, pos=np.array([arena_center-5, arena_center+5]))
run_simulation(fish=fish, observer=observer, run_time=run_time, dark=True, white_axis=False, no_legend=True)
###Output
Please wait patiently 30 seconds. Thanks.
It's time to say bye bye!
###Markdown
Hear the signal of the star and use vision to swim towards it

Robots disperse until the first one hears the signal from the star. That robot broadcasts the information that it has detected the star. Thereupon, all robots switch to aggregation, except that robots that can hear the star swim towards it using their perception. This pulls the center of the collective, and therefore all robots, towards the star.
###Code
"""Hear star, see star, gather at star
"""
from events import Homing
run_time = 80 # in seconds
num_fish = 20
arena_size = 30
arena_center = arena_size / 2.0
initial_spread = 1
fish_pos = initial_spread * np.random.rand(num_fish, 2) + arena_center - initial_spread / 2.0
clock_freqs = 1
verbose = False
distortion = generate_distortion(type='none', n=arena_size)
environment = Environment(
node_pos=fish_pos,
distortion=distortion,
prob_type='binary',
noise_magnitude=0.1,
conn_thres=8,
verbose=verbose
)
interaction = Interaction(environment, verbose=verbose)
channel = Channel(environment)
fish = generate_fish(
n=num_fish,
channel=channel,
interaction=interaction,
lim_neighbors=[2,3],
neighbor_weights=1.0,
fish_max_speeds=1,
clock_freqs=clock_freqs,
verbose=verbose
)
channel.set_nodes(fish)
observer = Observer(fish=fish, environment=environment, channel=channel)
missing_aircraft = Homing()
for i in range(1, run_time):
observer.instruct(event=missing_aircraft, rel_clock=i, pos=np.array([arena_center-6, arena_center+6]))
run_simulation(fish=fish, observer=observer, run_time=run_time, dark=True)
###Output
Please wait patiently 80 seconds. Thanks.
It's time to say bye bye!
|
Data/Processes/Suma/.ipynb_checkpoints/Clusters-checkpoint.ipynb | ###Markdown
Visualize the data and remove the columns that are not needed
###Code
df = pd.read_csv('Suma_todasLasSesiones.csv')
df = df.drop(['Sesion','Id'], axis=1)
#df = df[df['Fsm']!=0]
###Output
_____no_output_____
###Markdown
Data filtering

Histogram of the grades
###Code
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
datos = df.drop(['Nota'],1).hist()
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Create the data for the clusters and the categories
###Code
clusters = df[['Nota']]
X = df.drop(['Nota'],1)
## Normalize the data so that the values lie in the range (0, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
x = scaler.fit_transform(X)
###Output
_____no_output_____
###Markdown
Define the methods to be used for clustering
###Code
def clusterDBscan(x):
db = cluster.DBSCAN(eps=0.175, min_samples=5)
db.fit(x)
return db.labels_
def clusterKMeans(x, n_clusters):
return cluster.k_means(x, n_clusters=n_clusters)[1]
###Output
_____no_output_____
###Markdown
Create functions, in case they are needed, to reduce the dimensionality
###Code
def reducir_dim(x, ndim):
pca = PCA(n_components=ndim)
return pca.fit_transform(x)
def reducir_dim_tsne(x, ndim):
pca = TSNE(n_components=ndim)
return pca.fit_transform(x)
###Output
_____no_output_____
###Markdown
Plot the values for the possible numbers of clusters based on the silhouette score
###Code
def calculaSilhoutter(x, clusters):
res=[]
fig, ax = plt.subplots(1,figsize=(20, 5))
for numCluster in range(2, 7):
res.append(silhouette_score(x, clusterKMeans(x,numCluster )))
ax.plot(range(2, 7), res)
ax.set_xlabel("n clusters")
ax.set_ylabel("silouhette score")
ax.set_title("K-Means")
calculaSilhoutter(x, clusters)
###Output
_____no_output_____
###Markdown
Plot the values for the possible numbers of clusters based on the elbow method
###Code
model = KMeans()
visualizer = KElbowVisualizer(model, k=(2,7), metric='calinski_harabasz', timings=False)
visualizer.fit(x) # Fit the data to the visualizer
visualizer.show()
clus_km = clusterKMeans(x, 3)
clus_db = clusterDBscan(x)
def reducir_dataset(x, how):
if how == "pca":
res = reducir_dim(x, ndim=2)
elif how == "tsne":
res = reducir_dim_tsne(x, ndim=2)
else:
return x[:, :2]
return res
results = pd.DataFrame(np.column_stack([reducir_dataset(x, how="tsne"), clusters, clus_km, clus_db]), columns=["x", "y", "clusters", "clus_km", "clus_db"])
def mostrar_resultados(res):
"""Muestra los resultados de los algoritmos
"""
fig, ax = plt.subplots(1, 3, figsize=(20, 5))
sns.scatterplot(data=res, x="x", y="y", hue="clusters", ax=ax[0], legend="full")
ax[0].set_title('Ground Truth')
sns.scatterplot(data=res, x="x", y="y", hue="clus_km", ax=ax[1], legend="full")
ax[1].set_title('K-Means')
sns.scatterplot(data=res, x="x", y="y", hue="clus_db", ax=ax[2], legend="full")
ax[2].set_title('DBSCAN')
mostrar_resultados(results)
kmeans = KMeans(n_clusters=3,init = "k-means++")
kmeans.fit(x)
labels = kmeans.predict(x)
X['Cluster_Km']=labels
X.groupby('Cluster_Km').mean()
###Output
_____no_output_____
###Markdown
DBSCAN
###Code
neigh = NearestNeighbors(n_neighbors=2)
nbrs = neigh.fit(x)
distances, indices = nbrs.kneighbors(x)
distances = np.sort(distances, axis=0)
distances = distances[:,1]
plt.plot(distances)
dbscan = cluster.DBSCAN(eps=0.175, min_samples=5)
dbscan.fit(x)
clusterDbscan = dbscan.labels_
X['Cluster_DB']=clusterDbscan
X.groupby('Cluster_DB').mean()
X
###Output
_____no_output_____ |
nbs/03a_parallel.ipynb | ###Markdown
Parallel
> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by:
1. Setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`
2. Setting `nt` threads for numpy and pytorch.
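For example (a minimal usage sketch, not part of the original notebook), you could cap all of the libraries above at four threads:
###Code
# hypothetical usage of `set_num_threads` defined above
set_num_threads(4)
###Output
_____no_output_____
###Markdown
The helper below is used by the executor classes that follow.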
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
def add_one(x, a=1):
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
if sys.platform != "win32":
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
test_n_workers = 0 if sys.platform == "win32" else 2
parallel(print_time, range(5), n_workers=test_n_workers, pause=0.25);
###Output
0 2021-01-22 21:17:38.942321
1 2021-01-22 21:17:39.192929
2 2021-01-22 21:17:39.444098
3 2021-01-22 21:17:39.695087
4 2021-01-22 21:17:39.946463
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
if sys.platform != "win32":
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split into `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
test_n_workers = 0 if sys.platform == "win32" else 2
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=test_n_workers))
test_eq(res.sorted().itemgot(1), x+1)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Parallel
> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by:
1. Setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`
2. Setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
def parallelable(param_name, num_workers, f=None):
f_in_main = f == None or sys.modules[f.__module__].__name__ == "__main__"
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0 and f_in_main:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print(f"So `{param_name}` has to be changed to 0 to avoid getting stuck")
return False
return True
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if not parallelable('max_workers', self.max_workers, f): self.max_workers = 0
self.not_parallel = self.max_workers==0
if self.not_parallel: self.max_workers=1
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
#export
def add_one(x, a=1):
# this import is necessary for multiprocessing in notebook on windows
import random
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
0 2021-10-30 06:33:53.045670
1 2021-10-30 06:33:53.296746
2 2021-10-30 06:33:53.549248
3 2021-10-30 06:33:53.801336
4 2021-10-30 06:33:54.052961
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if not parallelable('n_workers', n_workers): n_workers = 0
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split into `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
#hide
from subprocess import Popen, PIPE
# test num_workers > 0 in scripts works when python process start method is spawn
process = Popen(["python", "parallel_test.py"], stdout=PIPE)
_, err = process.communicate(timeout=5)
exit_code = process.wait()
test_eq(exit_code, 0)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 06_docments.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
Converted parallel_win.ipynb.
###Markdown
Parallel
> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by:
1. Setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`
2. Setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
if progress is None: progress = progress_bar is not None
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
def add_one(x, a=1):
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
_____no_output_____
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split into `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Parallel
> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by:
1. Setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`
2. Setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
def parallelable(param_name, num_workers, f=None):
f_in_main = f == None or sys.modules[f.__module__].__name__ == "__main__"
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0 and f_in_main:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print(f"So `{param_name}` has to be changed to 0 to avoid getting stuck")
return False
return True
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if not parallelable('max_workers', self.max_workers, f): self.max_workers = 0
self.not_parallel = self.max_workers==0
if self.not_parallel: self.max_workers=1
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
#export
def add_one(x, a=1):
# this import is necessary for multiprocessing in notebook on windows
import random
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
0 2021-10-30 06:33:53.045670
1 2021-10-30 06:33:53.296746
2 2021-10-30 06:33:53.549248
3 2021-10-30 06:33:53.801336
4 2021-10-30 06:33:54.052961
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if not parallelable('n_workers', n_workers): n_workers = 0
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split into `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
#hide
from subprocess import Popen, PIPE
# test num_workers > 0 in scripts works when python process start method is spawn
process = Popen(["python", "parallel_test.py"], stdout=PIPE)
_, err = process.communicate(timeout=5)
exit_code = process.wait()
test_eq(exit_code, 0)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 06_docments.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
Converted parallel_win.ipynb.
###Markdown
Parallel> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by: (1) setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`, and (2) setting `nt` threads for numpy and pytorch.
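A minimal usage sketch (the thread count `2` is arbitrary): call it once near the start of a program so every backend is capped consistently:
```python
set_num_threads(2)  # asks numpy, pytorch and the BLAS/OpenMP backends to use 2 threads
```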
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
def add_one(x, a=1):
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
_____no_output_____
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Parallel> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by: (1) setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`, and (2) setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
def parallelable(param_name, num_workers, f=None):
f_in_main = f == None or sys.modules[f.__module__].__name__ == "__main__"
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0 and f_in_main:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print(f"So `{param_name}` has to be changed to 0 to avoid getting stuck")
return False
return True
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if not parallelable('max_workers', self.max_workers, f): self.max_workers = 0
self.not_parallel = self.max_workers==0
if self.not_parallel: self.max_workers=1
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
#export
def add_one(x, a=1):
# this import is necessary for multiprocessing in notebook on windows
import random
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
0 2021-02-23 06:38:58.778425
1 2021-02-23 06:38:59.028804
2 2021-02-23 06:38:59.280227
3 2021-02-23 06:38:59.530889
4 2021-02-23 06:38:59.781011
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if not parallelable('n_workers', n_workers): n_workers = 0
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
from subprocess import Popen, PIPE
# test num_workers > 0 in scripts works when python process start method is spawn
process = Popen(["python", "parallel_test.py"], stdout=PIPE)
_, err = process.communicate(timeout=5)
exit_code = process.wait()
test_eq(exit_code, 0)
###Output
_____no_output_____
###Markdown
Parallel> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by: (1) setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`, and (2) setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
def parallelable(param_name, num_workers, f=None):
f_in_main = f == None or sys.modules[f.__module__].__name__ == "__main__"
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0 and f_in_main:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print(f"So `{param_name}` has to be changed to 0 to avoid getting stuck")
return False
return True
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if not parallelable('max_workers', self.max_workers, f): self.max_workers = 0
self.not_parallel = self.max_workers==0
if self.not_parallel: self.max_workers=1
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
#export
def add_one(x, a=1):
# this import is necessary for multiprocessing in notebook on windows
import random
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
def die_sometimes(x):
# if 3<x<6: raise Exception(f"exc: {x}")
return x*2
parallel(die_sometimes, range(8))
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
if not parallelable('n_workers', n_workers): n_workers = 0
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
#hide
from subprocess import Popen, PIPE
# test num_workers > 0 in scripts works when python process start method is spawn
process = Popen(["python", "parallel_test.py"], stdout=PIPE)
_, err = process.communicate(timeout=5)
exit_code = process.wait()
test_eq(exit_code, 0)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 06_docments.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
Converted parallel_win.ipynb.
###Markdown
Parallel> Threading and multiprocessing functions
###Code
#export
def threaded(f):
"Run `f` in a thread, and returns the thread"
@wraps(f)
def _f(*args, **kwargs):
res = Thread(target=f, args=args, kwargs=kwargs)
res.start()
return res
return _f
@threaded
def _1():
time.sleep(0.05)
print("second")
@threaded
def _2():
time.sleep(0.01)
print("first")
_1()
_2()
time.sleep(0.1)
#export
def startthread(f):
"Like `threaded`, but start thread immediately"
threaded(f)()
@startthread
def _():
time.sleep(0.05)
print("second")
@startthread
def _():
time.sleep(0.01)
print("first")
time.sleep(0.1)
#export
def set_num_threads(nt):
"Get numpy (and others) to use `nt` threads"
try: import mkl; mkl.set_num_threads(nt)
except: pass
try: import torch; torch.set_num_threads(nt)
except: pass
os.environ['IPC_ENABLE']='1'
for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']:
os.environ[o] = str(nt)
###Output
_____no_output_____
###Markdown
This sets the number of threads consistently for many tools, by: (1) setting the following environment variables equal to `nt`: `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`, `OMP_NUM_THREADS`, `MKL_NUM_THREADS`, and (2) setting `nt` threads for numpy and pytorch.
###Code
#export
def _call(lock, pause, n, g, item):
l = False
if pause:
try:
l = lock.acquire(timeout=pause*(n+2))
time.sleep(pause)
finally:
if l: lock.release()
return g(item)
#export
def check_parallel_num(param_name, num_workers):
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print(f"So `{param_name}` is changed to 0 to avoid getting stuck")
num_workers = 0
return num_workers
#export
class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
"Same as Python's ThreadPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ThreadPoolExecutor, title_level=4)
#export
class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor):
"Same as Python's ProcessPoolExecutor, except can pass `max_workers==0` for serial execution"
def __init__(self, max_workers=defaults.cpus, on_exc=print, pause=0, **kwargs):
if max_workers is None: max_workers=defaults.cpus
max_workers = check_parallel_num('max_workers', max_workers)
store_attr()
self.not_parallel = max_workers==0
if self.not_parallel: max_workers=1
super().__init__(max_workers, **kwargs)
def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
if self.not_parallel == False: self.lock = Manager().Lock()
g = partial(f, *args, **kwargs)
if self.not_parallel: return map(g, items)
_g = partial(_call, self.lock, self.pause, self.max_workers, g)
try: return super().map(_g, items, timeout=timeout, chunksize=chunksize)
except Exception as e: self.on_exc(e)
show_doc(ProcessPoolExecutor, title_level=4)
#export
try: from fastprogress import progress_bar
except: progress_bar = None
#export
def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=None, pause=0,
threadpool=False, timeout=None, chunksize=1, **kwargs):
"Applies `func` in parallel to `items`, using `n_workers`"
pool = ThreadPoolExecutor if threadpool else ProcessPoolExecutor
with pool(n_workers, pause=pause) as ex:
r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
if progress and progress_bar:
if total is None: total = len(items)
r = progress_bar(r, total=total, leave=False)
return L(r)
def add_one(x, a=1):
time.sleep(random.random()/80)
return x+a
inp,exp = range(50),range(1,51)
test_eq(parallel(add_one, inp, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, threadpool=True, n_workers=2, progress=False), exp)
test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52))
test_eq(parallel(add_one, inp, n_workers=0), exp)
test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52))
###Output
_____no_output_____
###Markdown
Use the `pause` parameter to ensure a pause of `pause` seconds between processes starting. This is in case there are race conditions in starting some process, or to stagger the time each process starts, for example when making many requests to a webserver. Set `threadpool=True` to use `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
###Code
from datetime import datetime
def print_time(i):
time.sleep(random.random()/1000)
print(i, datetime.now())
parallel(print_time, range(5), n_workers=2, pause=0.25);
###Output
0 2021-02-03 09:51:30.561681
1 2021-02-03 09:51:30.812066
2 2021-02-03 09:51:31.063662
3 2021-02-03 09:51:31.313478
4 2021-02-03 09:51:31.564776
###Markdown
Note that `f` should accept a collection of items.
###Code
#export
def run_procs(f, f_done, args):
"Call `f` for each item in `args` in parallel, yielding `f_done`"
processes = L(args).map(Process, args=arg0, target=f)
for o in processes: o.start()
yield from f_done()
processes.map(Self.join())
#export
def _f_pg(obj, queue, batch, start_idx):
for i,b in enumerate(obj(batch)): queue.put((start_idx+i,b))
def _done_pg(queue, items): return (queue.get() for _ in items)
#export
def parallel_gen(cls, items, n_workers=defaults.cpus, **kwargs):
"Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel."
n_workers = check_parallel_num('n_workers', n_workers)
if n_workers==0:
yield from enumerate(list(cls(**kwargs)(items)))
return
batches = L(chunked(items, n_chunks=n_workers))
idx = L(itertools.accumulate(0 + batches.map(len)))
queue = Queue()
if progress_bar: items = progress_bar(items, leave=False)
f=partial(_f_pg, cls(**kwargs), queue)
done=partial(_done_pg, queue, items)
yield from run_procs(f, done, L(batches,idx).zip())
class _C:
def __call__(self, o): return ((i+1) for i in o)
items = range(5)
res = L(parallel_gen(_C, items, n_workers=0))
idxs,dat1 = zip(*res.sorted(itemgetter(0)))
test_eq(dat1, range(1,6))
res = L(parallel_gen(_C, items, n_workers=3))
idxs,dat2 = zip(*res.sorted(itemgetter(0)))
test_eq(dat2, dat1)
###Output
_____no_output_____
###Markdown
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a generator of tuples of item indices and results.
###Code
class TestSleepyBatchFunc:
"For testing parallel processes that run at different speeds"
def __init__(self): self.a=1
def __call__(self, batch):
for k in batch:
time.sleep(random.random()/4)
yield k+self.a
x = np.linspace(0,0.99,20)
res = L(parallel_gen(TestSleepyBatchFunc, x, n_workers=2))
test_eq(res.sorted().itemgot(1), x+1)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
from subprocess import Popen, PIPE
# test num_workers > 0 in scripts works when python process start method is spawn
process = Popen(["python", "parallel_test.py"], stdout=PIPE)
_, err = process.communicate(timeout=5)
exit_code = process.wait()
test_eq(exit_code, 0)
###Output
_____no_output_____ |
Content/code/3. Grav_Mag_modeling/3.4. Prism_modeling/.ipynb_checkpoints/2. 3D_modelagem_mag_prisma-checkpoint.ipynb | ###Markdown
3D magnetic modeling of a rectangular prism **[References]*** Nagy, D., G. Papp, and J. Benedek (2000), The gravitational potential and its derivatives for the prism: Journal of Geodesy, 74, 552–560, doi: 10.1007/s001900000116. Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import prism_mag
###Output
_____no_output_____
###Markdown
Generating the coordinate-system parameters
###Code
Nx = 100
Ny = 50
area = [-1000.,1000.,-1000.,1000.]
shape = (Nx,Ny)
x = np.linspace(area[0],area[1],num=Nx)
y = np.linspace(area[2],area[3],num=Ny)
yc,xc = np.meshgrid(y,x)
voo = -200.
zc = voo*np.ones_like(xc)
coordenadas = np.array([yc.ravel(),xc.ravel(),zc.ravel()])
###Output
_____no_output_____
###Markdown
Generating the prism parameters
###Code
intensidades = np.array([50.])
direcoes = np.array([[-50.,-20.]])
modelo = np.array([[-50,50,-450,450,50,250]])
###Output
_____no_output_____
###Markdown
Computing the magnetic field components
###Code
bz = prism_mag.magnetic(coordenadas,modelo,intensidades,direcoes,field="b_z")
bx = prism_mag.magnetic(coordenadas,modelo,intensidades,direcoes,field="b_x")
by = prism_mag.magnetic(coordenadas,modelo,intensidades,direcoes,field="b_y")
###Output
_____no_output_____
###Markdown
Approximate total-field anomaly
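The cell below approximates it by projecting the computed field onto the unit vector of the main field direction defined by $I_0$ and $D_0$, $\hat{\mathbf{j}}_0=(\cos I_0\cos D_0,\ \cos I_0\sin D_0,\ \sin I_0)$, so that $\Delta T \approx \hat{\mathbf{j}}_0\cdot\mathbf{B} = j_{0x}B_x + j_{0y}B_y + j_{0z}B_z$ (same component convention as the code).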
###Code
I0,D0 = -20.,-20.
j0x = np.cos(np.deg2rad(I0))*np.cos(np.deg2rad(D0))
j0y = np.cos(np.deg2rad(I0))*np.sin(np.deg2rad(D0))
j0z = np.sin(np.deg2rad(I0))
tfa = j0x*bx + j0y*by + j0z*bz
###Output
_____no_output_____
###Markdown
Visualizing the computed data
###Code
title_font = 18
bottom_font = 15
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)
plt.subplot(2,2,1)
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.title('Bx (nT)', fontsize=title_font)
plt.pcolor(yc,xc,bx.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,2)
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.title('By (nT)', fontsize=title_font)
plt.pcolor(yc,xc,by.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,3)
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.title('Bz (nT)', fontsize=title_font)
plt.pcolor(yc,xc,bz.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,4)
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.title('TFA (nT)', fontsize=title_font)
plt.pcolor(yc,xc,tfa.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
file_name = 'images/forward_modeling_prism_mag_tot_HS'
plt.savefig(file_name+'.png',dpi=300)
plt.show()
###Output
_____no_output_____ |
boston-house-pricing-linear-regression.ipynb | ###Markdown
Compare Custom SGD with Sklearn SGD
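For reference, the metrics reported below follow the standard definitions: $\mathrm{MSE}=\frac{1}{n}\sum_i (y_i-\hat{y}_i)^2$, $\mathrm{MAE}=\frac{1}{n}\sum_i |y_i-\hat{y}_i|$, and the variance ($R^2$) score $R^2 = 1-\frac{\sum_i (y_i-\hat{y}_i)^2}{\sum_i (y_i-\bar{y})^2}$; lower MSE/MAE and an $R^2$ closer to 1 indicate a better fit.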
###Code
# Sklearn SGD
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Y_test, Y_pred))
# Explained variance score: 1 is perfect prediction
print("Variance score: %.2f" % r2_score(Y_test, Y_pred))
# The mean absolute error
print("Mean Absolute Error: %.2f" % mean_absolute_error(Y_test, Y_pred))
# Implemented SGD
# The mean squared error
error = cost_function(optimal_b, optimal_w, np.asmatrix(x_test), np.asmatrix(y_test))
print("Mean squared error: %.2f" % (error))
# Explained variance score : 1 is perfect prediction
r_squared = r_sq_score(optimal_b, optimal_w, np.asmatrix(x_test), np.asmatrix(y_test))
print("Variance score: %.2f" % r_squared)
absolute_error = absolute_cost_function(optimal_b, optimal_w, np.asmatrix(x_test), np.asmatrix(y_test))
print("Mean Absolute Error: %.2f" % absolute_error)
# Scatter plot of test vs predicted
# sklearn SGD
plt.figure(1)
plt.subplot(211)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Prices: $Y_i$")
plt.ylabel("Predicted prices: $\hat{Y}_i$")
plt.title("Prices vs Predicted prices: Sklearn SGD")
plt.show()
# Implemented SGD
plt.subplot(212)
plt.scatter([y_test], [(np.dot(np.asmatrix(x_test), optimal_w) + optimal_b)])
plt.xlabel("Prices: $Y_i$")
plt.ylabel("Predicted prices: $\hat{Y}_i$")
plt.title("Prices vs Predicted prices: Implemented SGD")
plt.show()
# Distribution of error
delta_y_im = np.asmatrix(y_test) - (np.dot(np.asmatrix(x_test), optimal_w) + optimal_b)
delta_y_sk = Y_test - Y_pred
import seaborn as sns;
import numpy as np;
sns.set_style('whitegrid')
sns.kdeplot(np.asarray(delta_y_im)[0], label = "Implemented SGD", bw = 0.5)
sns.kdeplot(np.array(delta_y_sk), label = "Sklearn SGD", bw = 0.5)
plt.title("Distribution of error: $y_i$ - $\hat{y}_i$")
plt.xlabel("Error")
plt.ylabel("Density")
plt.legend()
plt.show()
# Distribution of predicted value
sns.set_style('whitegrid')
sns.kdeplot(np.array(np.dot(np.asmatrix(x_test), optimal_w) + optimal_b).T[0], label = "Implemented SGD")
sns.kdeplot(Y_pred, label = "Sklearn SGD")
plt.title("Distribution of prediction $\hat{y}_i$")
plt.xlabel("predicted values")
plt.ylabel("Density")
plt.show()
from prettytable import PrettyTable
# MSE = mean squared error
# MAE = mean absolute error
x=PrettyTable()#np.asmatrix(x_test),
x.field_names=['Model','Weight Vector','MSE','MAE', 'Variance Score']
x.add_row(['sklearn',sklearn_w,mean_squared_error(Y_test, clf_.predict(X_test)),mean_absolute_error(Y_test, clf_.predict(X_test)),r2_score(Y_test, Y_pred)])
x.add_row(['custom',optimal_w,error,absolute_error,r_squared])
print(x)
sklearn_pred=clf_.predict(x_test)
implemented_pred=(np.dot(np.asmatrix(x_test), optimal_w) + optimal_b)
x=PrettyTable()
x.field_names=['SKLearn SGD predicted value','Implemented SGD predicted value']
for itr in range(15):
x.add_row([sklearn_pred[itr],implemented_pred[itr]])
print(x)
###Output
+-----------------------------+---------------------------------+
| SKLearn SGD predicted value | Implemented SGD predicted value |
+-----------------------------+---------------------------------+
| 11.010976872064473 | [[9.34267897]] |
| 28.13265575430431 | [[21.81915391]] |
| 32.610429206840855 | [[27.97043084]] |
| 19.47265691695546 | [[22.43740506]] |
| 26.99547481859689 | [[20.51530473]] |
| 18.17885314254281 | [[15.23322088]] |
| 6.450183867637406 | [[10.74021524]] |
| 25.429866825378358 | [[23.82642385]] |
| 21.60484164577307 | [[19.48934147]] |
| 24.084364627932437 | [[21.50028139]] |
| 6.151923708168887 | [[7.03308681]] |
| 27.77673286644099 | [[21.02386188]] |
| 10.057460526020344 | [[7.98370263]] |
| 15.644363660603457 | [[16.73402384]] |
| 23.502153086425825 | [[21.90299932]] |
+-----------------------------+---------------------------------+
|
NARX_weather.ipynb | ###Markdown
###Code
# https://sysidentpy.org/
!pip install sysidentpy
!pip install matplotlib==3.1.3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sysidentpy.metrics import mean_squared_error
from sysidentpy.utils.generate_data import get_siso_data
# Generate a dataset of a simulated dynamical system
x_train, x_valid, y_train, y_valid = get_siso_data(
n=1000,
colored_noise=False,
sigma=0.001,
train_percentage=80
)
print(np.shape(x_train))
# Polynomial NARX
from sysidentpy.model_structure_selection import FROLS
from sysidentpy.basis_function._basis_function import Polynomial
from sysidentpy.utils.display_results import results
from sysidentpy.utils.plotting import plot_residues_correlation, plot_results
from sysidentpy.residues.residues_correlation import compute_residues_autocorrelation, compute_cross_correlation
from sysidentpy.metrics._regression import root_relative_squared_error
basis_function = Polynomial(degree=3)
model = FROLS(
order_selection=True,
n_info_values=10,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria='aic',
estimator='least_squares',
basis_function=basis_function
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model, model.theta, model.err,
model.n_terms, err_precision=8, dtype='sci'
),
columns=['Regressors', 'Parameters', 'ERR'])
print(r)
#!python -m pip uninstall matplotlib
#!pip install matplotlib==3.1.3
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
#x1e = compute_cross_correlation(y_valid, yhat, x2_val)
#plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
from torch import nn
from sysidentpy.neural_network import NARXNN
class NARX(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(4, 10)
self.lin2 = nn.Linear(10, 10)
self.lin3 = nn.Linear(10, 1)
self.tanh = nn.Tanh()
def forward(self, xb):
z = self.lin(xb)
z = self.tanh(z)
z = self.lin2(z)
z = self.tanh(z)
z = self.lin3(z)
return z
narx_net = NARXNN(
net=NARX(),
ylag=2,
xlag=2,
loss_func='mse_loss',
optimizer='Adam',
epochs=200,
verbose=False,
optim_params={'betas': (0.9, 0.999), 'eps': 1e-05} # optional parameters of the optimizer
)
train_dl = narx_net.data_transform(x_train, y_train)
valid_dl = narx_net.data_transform(x_valid, y_valid)
narx_net.fit(train_dl, valid_dl)
yhat = narx_net.predict(x_valid, y_valid)
ee, ex, extras, lam = narx_net.residuals(x_valid, y_valid, yhat)
narx_net.plot_result(y_valid, yhat, ee, ex)
###Output
/usr/local/lib/python3.7/dist-packages/sysidentpy/utils/deprecation.py:37: FutureWarning: Function __init__ has been deprecated since v0.1.7.
Use NARXNN(ylag=2, xlag=2, basis_function='Some basis function') instead.This module was deprecated in favor of NARXNN(ylag=2, xlag=2, basis_function='Some basis function') module into which all the refactored classes and functions are moved.
This feature will be removed in version v0.2.0.
warnings.warn(message, FutureWarning)
/usr/local/lib/python3.7/dist-packages/sysidentpy/utils/deprecation.py:37: FutureWarning: Function residuals has been deprecated since v0.1.7.
Use from sysidentpy.residues_correlation import compute_cross_correlation, compute_residues_autocorrelation instead.This module was deprecated in favor of from sysidentpy.residues_correlation import compute_cross_correlation, compute_residues_autocorrelation module into which all the refactored classes and functions are moved.
This feature will be removed in version v0.2.0.
warnings.warn(message, FutureWarning)
|
1_3_Types_of_Features_Image_Segmentation/3. K-means.ipynb | ###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
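# criteria is (type, max_iter, epsilon): stop after at most 100 iterations,
# or once the cluster centers move by less than 0.2 between iterations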
## TODO: Select a value for k
# then perform k-means clustering
k = 3
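# cv2.kmeans(data, K, bestLabels, criteria, attempts, flags): None means no initial
# labels are supplied; 10 attempts are run from different random center initializations
# and the most compact labeling is returned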
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
cv2.kmeans?
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==1, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(image)
ax2.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
print(image.shape)
###Output
(2000, 3008, 3)
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
print(pixel_vals)
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
[[33 66 23]
[33 66 23]
[33 66 23]
...
[23 44 11]
[24 43 11]
[24 43 11]]
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
# EPS (epsilon = 0.2): k-means stops early once the cluster centers move by less than this value between iterations
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# None = no initial labels are supplied (the bestLabels argument)
# 10 = number of attempts with different random initializations (the best result is kept)
# cv2.KMEANS_RANDOM_CENTERS = cluster centers are initialized randomly
print(labels)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
print(segmented_data)
print(centers)
print(labels_reshape)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray') # show cluster 2 (the 3 clusters are labeled 0, 1, 2)
# mask an image segment by cluster
cluster = 2 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
image_copy = np.copy(image)
print(image_copy.shape)
pixel_vals = image_copy.reshape((-1,3))
print(pixel_vals.shape)
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
(2000, 3008, 3)
(6016000, 3)
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
segmented_data
labels.flatten()
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 1.0)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
# mask an image segment by cluster
cluster = 2 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 0, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/pancakes.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
image.shape
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
pixel_vals.shape
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
pixel_vals?
###Output
_____no_output_____
###Markdown
Implement k-means clustering kmeans referencehttps://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
image.shape[0] * image.shape[1]
labels?
centers
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 5
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==1, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 9
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==8, cmap='gray')
# mask an image segment by cluster
cluster = 3 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [255, 0, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/pancakes.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
plt.imshow(labels_reshape==1, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask red!
masked_image[labels_reshape == cluster] = [255, 0, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [255, 0, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
# input (# of pixels, # of color channels)
pixel_vals = image.reshape((-1,3))
# Convert to float type - for kmeans
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# Input (100 is max # iterations, 0.2 is amount the center must move to iterate again)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
# Select a value for k then perform k-means clustering
# Input (converted pixel values, k, labels, stop criteria, # of attempts, how we choose initial center points)
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data back into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data back into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
# Shows the labels equal to 1
plt.imshow(labels_reshape==1, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/orange2.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 1 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
print(image.shape)
###Output
(2000, 3008, 3)
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
print(pixel_vals.shape)
###Output
(6016000, 3)
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.01)
## TODO: Select a value for k
# then perform k-means clustering
k = 6
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==5, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == 3] = [0, 0, 0]
masked_image[labels_reshape == 5] = [0, 0, 0]
#masked_image[labels_reshape == 2] = [0, 255, 0]
#masked_image[labels_reshape == 4] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
image.shape
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
pixel_vals.shape
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
# mask an image segment by cluster
cluster = 2 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 4
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0)
plt.imshow(labels_reshape==1)
plt.imshow(labels_reshape==2)
plt.imshow(labels_reshape==3)
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1)
## TODO: Select a value for k
# then perform k-means clustering
k = 8
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
print(segmented_image.shape)
# dsize
dsize = (400, 300)
# resize image
output = cv2.resize(segmented_image, dsize)
plt.imshow(output)
print(output.shape)
output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR)
cv2.imwrite( "kmeans.jpg", output );
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==2, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____
###Markdown
K-means Clustering Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
## TODO: Check out the images directory to see other images you can work with
# And select one!
image = cv2.imread('images/monarch.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Prepare data for k-means
###Code
# Reshape image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3))
# Convert to float type
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
Implement k-means clustering
###Code
# define stopping criteria
# you can change the number of max iterations for faster convergence!
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
## TODO: Select a value for k
# then perform k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
labels_reshape = labels.reshape(image.shape[0], image.shape[1])
plt.imshow(segmented_image)
## TODO: Visualize one segment, try to find which is the leaves, background, etc!
plt.imshow(labels_reshape==0, cmap='gray')
# mask an image segment by cluster
cluster = 0 # the first cluster
masked_image = np.copy(image)
# turn the mask green!
masked_image[labels_reshape == cluster] = [0, 255, 0]
plt.imshow(masked_image)
###Output
_____no_output_____ |
XGBoost/xgBoost_shap.ipynb | ###Markdown
**XGBoost_shap** **1. Abstract** This notebook explores using SHAP for model interpretability. **1.1. Shapley values** Shapley values measure the importance of a feature by comparing what a model predicts with and without that feature. Because the order in which a model sees features can affect its predictions, this comparison is averaged over every possible feature ordering, so that features are compared fairly. **2. xgboost shap**
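For background (added here for reference; this is the standard definition, not text from the original notebook), the Shapley value of feature $i$ for a model $f$ over feature set $F$ is the weighted average of its marginal contributions across all feature subsets:

$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\left[f_{S \cup \{i\}}\big(x_{S \cup \{i\}}\big) - f_S\big(x_S\big)\right]$$

TreeSHAP, used below via `shap.TreeExplainer`, computes these values exactly for tree ensembles such as XGBoost without enumerating every subset.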
###Code
import xgboost
import shap
import numpy as np
from matplotlib import pyplot
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
# load JS visualization code to notebook
shap.initjs()
###Output
_____no_output_____
###Markdown
**3. Loading the data from CSV**
###Code
data = pd.read_csv('C:/Users/abhig/Desktop/Linera Regression/insurance.csv')
data.head()
###Output
_____no_output_____
###Markdown
**4.Data Preprocessing** **4.1.Encoding**
###Code
data['sex'] = data.sex.map({'male':0, 'female':1})
data['smoker'] = data.smoker.map({'no':0, 'yes':1})
data.head(20)
###Output
_____no_output_____
###Markdown
**4.2. One-hot Encoding**
###Code
# creating instance of one-hot-encoder
enc = OneHotEncoder()
# one-hot encode the categorical 'region' column into indicator columns
enc_df = pd.DataFrame(enc.fit_transform(data[['region']]).toarray())
enc_df
enc_df.columns = ['northeast','northwest','southeast','southwest']
enc_df.apply(np.int64)
data =data.join(enc_df)
data=data.drop(['region'],axis=1)
data
###Output
_____no_output_____
###Markdown
**4.3. Splitting the data into train and test sets**
###Code
from sklearn.model_selection import train_test_split
X = data[ ['age', 'bmi', 'children', 'smoker','northeast','northwest', 'southeast', 'southwest']]
y = data['charges']
X_t, X_test, y_t, y_test = train_test_split(X, y, test_size=0.05, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_t, y_t, test_size=0.15, random_state=1)
###Output
_____no_output_____
###Markdown
**5.Shap** **5.1.Shap Force plot**
###Code
# train XGBoost model
model = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value, shap_values[1,:], X.iloc[1,:])
shap.force_plot(explainer.expected_value, shap_values[25,:], X.iloc[25,:])
###Output
_____no_output_____ |
tasks/extract_keywords/notebooks/pdf_keyword_extraction.ipynb | ###Markdown
Reading the data
###Code
# Reading data
INPUT_PATH = os.path.join(PROJECT_ROOT, "tasks", "extract_text", "output")
with open(os.path.join(INPUT_PATH, "pdf_files.json")) as json_file:
data = json.load(json_file)
df = pd.DataFrame(
{
"filename": data.keys(),
"country": [i["Country"] for i in data.values()],
"text": [i["Text"] for i in data.values()]
}
)
# Creating word count field
df['word_count'] = df['text'].apply(lambda x: len(str(x).split(" ")))
df.count()
# Removing document without text
rmv = df.index[df['word_count'] == 1].tolist()
print(df.loc[rmv, 'filename'])
df = df.drop(rmv).reset_index(drop=True)
df.count()
# Removing badly read documents
bad_docs = ["CreditoGanadero_Mexico", "Ley Especial Cafe_ElSalvador", "Sembrando Vida Report"]
df = df.drop(df.index[df['filename'].isin(bad_docs)].tolist()).reset_index(drop=True)
df.count()
df.head()
df.count()
###Output
_____no_output_____
###Markdown
Preprocessing the data Experiment: using a stanza pipeline. It turns out that lemmatization is not strictly necessary for now.
###Code
# import stanza
# nlp = stanza.Pipeline(lang='es', processors='tokenize,mwt,pos,lemma')
# lemmatize_pipeline = stanza.Pipeline(lang='es', processors='tokenize, lemma')
# def lemmatize_text(text):
# lemmatized_text = lemmatize_pipeline(text)
# return " ".join([word.lemma for sentence in lemmatized_text.sentences for word in sentence.words])
# df["pre_pretext"] = df["pre_pretext"].apply(lambda x: lemmatize_text(x))
###Output
_____no_output_____
###Markdown
Mix common stopwords with words that we know are frequent, such as dates
###Code
spa_stopwords = set(stopwords.words('spanish'))
extra_stopwords = {"ley", "artículo", "ser", "así", "según", "nº", "diario",
"enero", "febrero", "marzo", "abril", "mayo", "junio", "julio", "agosto", "setiembre", "octubre", "noviembre", "diciembre",
"lunes", "martes", "miercoles", "jueves", "viernes", "sabado", "domingo"}
spa_stopwords = spa_stopwords.union(extra_stopwords)
prep = CorpusPreprocess(
language='spanish',
stop_words=spa_stopwords,
lowercase=True,
strip_accents=True,
strip_numbers=True,
punctuation_list=punctuation,
strip_urls=True,
# stemmer=SnowballStemmer('spanish'),
max_df=0.9,
min_df=2
)
df['prep_text'] = prep.fit_transform(df['text'], tokenize=False)
df.head()
###Output
_____no_output_____
###Markdown
Word count for each document
###Code
# Fetch word count for each document
df['word_count'].plot(kind='box')
plt.show()
# Describe word count
df['word_count'].describe()
###Output
_____no_output_____
###Markdown
Should we weight each document? Without weighting, longer documents dominate the counts, so we could find keywords that do not represent each document equally. Bag-of-Words
###Code
# Count Vectorizer
cv = CountVectorizer(max_features=20000, ngram_range=(1,7))
bow_X = cv.fit_transform(df['prep_text'])
# Get top uni-grams
top_unigrams = get_top_n_ngrams(bow_X, cv.vocabulary_, 1, 20)
plt.bar(top_unigrams.keys(), top_unigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 unigrams')
plt.show()
# Get top bi-grams
top_bigrams = get_top_n_ngrams(bow_X, cv.vocabulary_, 2, 20)
plt.bar(top_bigrams.keys(), top_bigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 bigrams')
plt.show()
# Get top tri-grams
top_trigrams = get_top_n_ngrams(bow_X, cv.vocabulary_, 3, 20)
plt.bar(top_trigrams.keys(), top_trigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 trigrams')
plt.show()
top_trigrams
###Output
_____no_output_____
###Markdown
What if we want to normalize by word counts?
###Code
bow_X_norm = bow_X / bow_X.sum(axis=1)
# Get top uni-grams
top_unigrams = get_top_n_ngrams(bow_X_norm, cv.vocabulary_, 1, 20)
plt.bar(top_unigrams.keys(), top_unigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 unigrams')
plt.show()
# Get top bi-grams
top_bigrams = get_top_n_ngrams(bow_X_norm, cv.vocabulary_, 2, 20)
plt.bar(top_bigrams.keys(), top_bigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 bigrams')
plt.show()
# Get top tri-grams
top_trigrams = get_top_n_ngrams(bow_X_norm, cv.vocabulary_, 3, 20)
plt.bar(top_trigrams.keys(), top_trigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 trigrams')
plt.show()
top_trigrams
###Output
_____no_output_____
###Markdown
TF-IDF
###Code
# TF-IDF Vectorizer
tv = TfidfVectorizer(max_features=20000, ngram_range=(1,3))
tfidf_X = tv.fit_transform(df['prep_text'])
# Get top uni-grams
top_unigrams = get_top_n_ngrams(tfidf_X, tv.vocabulary_, 1, 20)
plt.bar(top_unigrams.keys(), top_unigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 unigrams')
plt.show()
# Get top bi-grams
top_bigrams = get_top_n_ngrams(tfidf_X, tv.vocabulary_, 2, 20)  # use the TF-IDF vectorizer's vocabulary to match tfidf_X
plt.bar(top_bigrams.keys(), top_bigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 bigrams')
plt.show()
# Get top tri-grams
top_trigrams = get_top_n_ngrams(tfidf_X, tv.vocabulary_, 3, 20)  # use the TF-IDF vectorizer's vocabulary to match tfidf_X
plt.bar(top_trigrams.keys(), top_trigrams.values())
plt.xticks(rotation=90)
plt.ylabel('freq')
plt.title('Top 20 trigrams')
plt.show()
###Output
_____no_output_____
###Markdown
Can we see keywords for a single document?
###Code
print(df.loc[40, "text"][:1000],"...")
print('\nGet top uni-grams bow:')
for k, v in get_top_n_ngrams(bow_X[40], cv.vocabulary_, 1, 10).items():
print(f"\"{k}\" count: {round(v,3)}")
print('\nGet top uni-grams tfidf:')
for k, v in get_top_n_ngrams(tfidf_X[40], tv.vocabulary_, 1, 10).items():
print(f"\"{k}\" count: {round(v,3)}")
###Output
_____no_output_____
###Markdown
Word cloud BOW
###Code
sorted_vocab = {k: v for k, v in sorted(cv.vocabulary_.items(), key=lambda item: item[1])}
frequencies = np.asarray(bow_X.sum(axis=0)).flatten()
word_freq = {k:v for k, v in zip(sorted_vocab.keys(), frequencies)}
wordcloud = WordCloud(
background_color='white',
max_words=100,
max_font_size=50,
random_state=42
).generate_from_frequencies(word_freq)
fig = plt.figure(figsize=(13, 13))
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
# fig.savefig("word1.png", dpi=900)
###Output
_____no_output_____
###Markdown
BOW normalized
###Code
sorted_vocab = {k: v for k, v in sorted(cv.vocabulary_.items(), key=lambda item: item[1])}
frequencies = np.asarray(bow_X_norm.sum(axis=0)).flatten()
word_freq = {k:v for k, v in zip(sorted_vocab.keys(), frequencies)}
wordcloud = WordCloud(
background_color='white',
max_words=100,
max_font_size=50,
random_state=42
).generate_from_frequencies(word_freq)
fig = plt.figure(figsize=(13, 13))
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
# fig.savefig("word1.png", dpi=900)
###Output
_____no_output_____
###Markdown
TF-IDF
###Code
sorted_vocab = {k: v for k, v in sorted(tv.vocabulary_.items(), key=lambda item: item[1])}
frequencies = np.asarray(tfidf_X.sum(axis=0)).flatten()
word_freq = {k:v for k, v in zip(sorted_vocab.keys(), frequencies)}
wordcloud = WordCloud(
background_color='white',
max_words=100,
max_font_size=50,
random_state=42
).generate_from_frequencies(word_freq)
fig = plt.figure(figsize=(13, 13))
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
# fig.savefig("word1.png", dpi=900)
###Output
_____no_output_____
###Markdown
Keyword extraction algorithms Preprocessing (keep sentence structure)
###Code
sentences = df['text'].apply(lambda x: sent_tokenize(x, language='spanish')).explode()
sentences
# Word count per sentence
sentences.str.split().apply(lambda x: len(x)).describe()
prep = CorpusPreprocess(
language='spanish',
stop_words=spa_stopwords,
lowercase=True,
strip_accents=True,
strip_numbers=True,
strip_punctuation=punctuation,
# stemmer=SnowballStemmer('spanish'),
max_df=0.9,
min_df=2
)
sentences_prep = pd.Series(prep.fit_transform(sentences, tokenize=False), index=sentences.index)
sentences_prep
###Output
_____no_output_____
###Markdown
Rake and TextRank
###Code
for ix in sentences_prep.index.unique():
# RAKE
rake = Rake(language="spanish")
rake.extract_keywords_from_sentences(sentences_prep[ix])
rake_out = rake.get_ranked_phrases()
print("\nRAKE OUTPUT:\n> ", "\n> ".join(rake_out[:10]))
# TextRankV1
textrankv1_out = keywords(" ".join(sentences_prep[ix]), split=True)
print("\nTEXTRANKV1 OUTPUT:\n> ", "\n> ".join(textrankv1_out[:10]))
# TextRankV2
textrankv2_out = summarize(". ".join(sentences_prep[ix]), split=True)
print("\nTEXTRANKV2 OUTPUT:\n> ", "\n> ".join(textrankv2_out[:10]))
break
###Output
_____no_output_____ |
tariff_map.ipynb | ###Markdown
Step 1: Get Shapefiles The next couple of cells download the requisite shapefiles from the US Census. They are unzipped into a shapefiles folder with a county subfolder (and a state subfolder below), so the code assumes that folder structure relative to your working directory.
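For reference, the folder layout the code expects, inferred from the extractall and read/write paths used later in this notebook, is roughly:

```
<working directory>/
    shapefiles/
        county/    # tl_2017_us_county.* files
        state/     # tl_2017_us_state.* files
    data/          # parquet and csv inputs (trade data, lost_jobs.csv)
    figures/       # output figures (created automatically if missing)
    docs/          # exported Bokeh .html map
```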
###Code
print("")
print("**********************************************************************************")
print("Downloading Shape files")
print("")
cwd = os.getcwd()
county_url = "https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip"
r = requests.get(county_url )
county_shapefile = zf.ZipFile(io.BytesIO(r.content))
county_shapefile.extractall(path = cwd + "\\shapefiles\\county")
del r, county_shapefile
###Output
**********************************************************************************
Downloading Shape files
###Markdown
Then do the same thing for states (so we can draw state lines as well). What's cool about these shapefiles is that you can then layer on other things: roads, rivers, lakes.
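As a minimal sketch of that layering idea (not part of the original notebook), an extra TIGER/Line layer could be drawn on top of the county map once it has been downloaded and unzipped the same way as the shapefiles below; the roads path here is a hypothetical placeholder and the sketch assumes the `us_map` GeoDataFrame and plot axes `ax` from the later cells already exist:

```python
import geopandas as gpd

# Assumes the county GeoDataFrame (us_map) and the matplotlib axes (ax)
# already exist, as they do in the plotting cells further down.
roads = gpd.read_file("shapefiles/roads/tl_2017_us_primaryroads.shx")  # hypothetical path
roads = roads.to_crs({'init': 'epsg:3395'})           # match the map's projection
roads["geometry"] = roads["geometry"].simplify(200)   # coarsen geometry to keep the plot light
roads.plot(ax=ax, color="black", linewidth=0.3)       # draw the extra layer on the same axes
```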
###Code
state_url = "https://www2.census.gov/geo/tiger/TIGER2017/STATE/tl_2017_us_state.zip"
r = requests.get(state_url)
state_shapefile = zf.ZipFile(io.BytesIO(r.content))
state_shapefile.extractall(path = cwd + "\\shapefiles\\state")
del r, state_shapefile
###Output
_____no_output_____
###Markdown
Step 2: Some basic cleaning We will grab the tariff data and compute the tariff change. Then we will merge it with the geopandas dataframe.
###Code
# Grab the tradedata...
file_path = os.getcwd()
fig_path = file_path +"\\figures"
trade_data = pq.read_table(file_path + "\\data\\trade_employment_blssingle19.parquet").to_pandas()
trade_data["time"] = pd.to_datetime(trade_data.time)
trade_data.set_index(["area_fips", "time"],inplace = True)
trade_data["tariff_change"] = trade_data.groupby(["area_fips"]).tariff.diff(12)
trade_data.sort_values(["area_fips", "time"], inplace = True)
trade_data.head()
###Output
_____no_output_____
###Markdown
Now we will grab the county-level shapefile
###Code
cwd = os.getcwd()
county_shape = cwd + "\\shapefiles\\county\\tl_2017_us_county.shx"
us_map = gpd.read_file(county_shape)
us_map = us_map.to_crs({'init': 'epsg:3395'})
us_map["geometry"] = us_map["geometry"].simplify(200)
# This was important. The geometry in the TIGER/Line file is
# too fine (the original map was ~350 MB). simplify() coarsens the geometry,
# making the map take up less memory and load faster. The argument (200) is
# the simplification tolerance, in the units of the CRS (metres for EPSG:3395).
us_map.head()
###Output
_____no_output_____
###Markdown
A little bit more cleaning so a merge can be done.
###Code
us_map["area_fips"] = (us_map.STATEFP.astype(str) + us_map.COUNTYFP.astype(str)).astype(int)
tariff_df = trade_data.xs('2018-12-1', level=1).copy()
tariff_df["fips_code"] = tariff_df.index
tariff_df["fips_code"] = tariff_df["fips_code"].astype(int)
tariff_df.shape
lost_jobs = pd.read_csv(cwd + "\\data\\lost_jobs.csv")
lost_jobs.head()
tariff_df = tariff_df.merge(lost_jobs, left_on = "fips_code", right_on = "GEOFIPS", how = "inner", indicator = True)
###Output
_____no_output_____
###Markdown
Then merge the geopandas dataframe with the regular dataframe
###Code
us_map = us_map.merge(tariff_df[["tariff_change","2017_population","fips_code", "lost_jobs"]], left_on='area_fips',
right_on = "fips_code", how = "inner", indicator = True)
us_map.head()
###Output
_____no_output_____
###Markdown
Now we will drop Alaska, Hawaii, and Puerto Rico, and bring in the state files too. Then plot.
###Code
us_map.set_index("STATEFP", inplace = True)
drop_list = ["02","15","72"]
us_map.drop(drop_list, inplace = True)
state_shape = cwd + "\\shapefiles\\state\\tl_2017_us_state.shx"
state_map = gpd.read_file(state_shape)
state_map = state_map.to_crs({'init': 'epsg:3395'})
state_map["geometry"] = state_map["geometry"].simplify(200)
state_fp_dict = dict(zip(state_map.STATEFP, state_map.STUSPS))
state_map.set_index("STATEFP", inplace = True)
drop_list = ["02","15","72","78","69","66","60",]
state_map.drop(drop_list, inplace = True)
us_map.reset_index(inplace = True)
us_map["STSPS"] = us_map["STATEFP"].map(state_fp_dict)
us_map["NAME"] = us_map["NAME"] + ", " + us_map["STSPS"]
us_map.set_index("STATEFP", inplace = True)
us_map["2017_population"] = us_map["2017_population"].map('{:,.0f}'.format)
us_map["lost_jobs"] = us_map["lost_jobs"] .round(0).astype(int)
us_map["lost_jobs"] = us_map["lost_jobs"].map('{:,.0f}'.format)
us_map["lost_cars"] = (-1.04)*us_map["tariff_change"]
us_map["lost_cars"] = us_map["lost_cars"].map('{:,.2f}'.format)
###Output
_____no_output_____
###Markdown
Step 3: Plot it. That's what we do below
###Code
us_map["q_tariff"] = pd.qcut(us_map["tariff_change"], 10,labels = False, duplicates='drop')
us_map.q_tariff.replace(np.nan,0,inplace = True)
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1,1,figsize = (12,8))
plt.tight_layout()
plt.rcParams.update(plt.rcParamsDefault) # This will reset defaults...
#################################################################################
# This is for the colorbar...
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="3%", pad=0.1)
#################################################################################
## This creates a discrete colorbar scheme...
# https://gist.github.com/jakevdp/91077b0cae40f8f8244a
N = 10
base = plt.cm.get_cmap("RdBu_r")
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
dcmap = base.from_list(cmap_name, color_list, N)
#################################################################################
# This is the normal mapping...
us_map.plot(column='q_tariff', ax = ax,
# THIS IS NEW, it says color it based on this column
cmap=dcmap,
alpha = 0.75,
vmin=0, vmax=us_map.q_tariff.max())
#################################################################################
# This then alows me to generate and edit the colorbar....
# https://stackoverflow.com/questions/53158096/editing-colorbar-legend-in-geopandas
sm = plt.cm.ScalarMappable(cmap=dcmap)
sm._A = []
cbr = fig.colorbar(sm, cax=cax)
cbr.set_label('Percentile in Tariff Distribution')
cbr.set_alpha(0.15)
cbr.set_ticks([0.10, 0.25,0.50,0.75, 0.90])
cbr.set_ticklabels(["10","25","50","75","90"], update_ticks=True)
#################################################################################
state_map.geometry.boundary.plot(color=None, edgecolor='k', alpha = 0.35, ax = ax)
#################################################################################
# Then some final stuff to clean things up....
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title("US County Tariff Exposure to China (as of Dec 2018)", fontsize = 16, loc= "left" )
#ax.text(-127,23, "Source: US Census, BLS", fontsize = 8)
#fig_path = "C:\\github\\expenditure_tradeshocks\\figures"
if not os.path.exists(fig_path):
os.makedirs(fig_path)
plt.savefig(fig_path +"\\us_china_exports_map.png", bbox_inches = "tight", dip = 1200)
plt.show()
import json
from bokeh.io import show
from bokeh.models import (CDSView, ColorBar, ColumnDataSource,
CustomJS, CustomJSFilter,
GeoJSONDataSource, HoverTool,
LinearColorMapper, Slider)
from bokeh.layouts import column, row, widgetbox
from bokeh.palettes import brewer
from bokeh.plotting import figure
from bokeh.models import Title
from bokeh.plotting import figure, save
from bokeh.resources import CDN
from bokeh.embed import file_html
# Input GeoJSON source that contains features for plotting
#geosource = GeoJSONDataSource(geojson = us_map.to_json())
state_geosource = GeoJSONDataSource(geojson = state_map.to_json())
geosource = GeoJSONDataSource(geojson = us_map.to_json())
palette = brewer['RdBu'][10]
#https://docs.bokeh.org/en/latest/docs/reference/palettes.html
color_mapper = LinearColorMapper(palette = palette, low = 0, high = 10)
tick_labels = {0:"",2:"20",4:"40",6:"60",8:"80",10:""}
color_bar = ColorBar(color_mapper = color_mapper,
label_standoff = 8,
width = 20, height = 420,
border_line_color = None,
orientation = "vertical",
location=(0,0),major_label_overrides = tick_labels,
major_tick_line_alpha = 0)
label = "County-Level Tariff Exposure to China \n Colorbar reports percentile in tariff distribution"
# Create figure object.
p = figure(
plot_height = 530 ,
plot_width = 850,
toolbar_location = 'below',
tools = "box_zoom, reset")
descip = "Colorbar reports percentile in tariff distribution; Hover tool reports county name, tariff increase"
descip = descip + ", population, estimates of % change in autos and jobs lost"
p.add_layout(Title(text=descip, text_font_style="italic", text_font_size="9pt"), 'above')
p.add_layout(Title(text="County-Level Tariff Exposure to Chinese Retaliation", text_font_size="11pt"), 'above')
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
# Add patch renderer to figure.
states = p.patches('xs','ys', source = geosource,
fill_color = {"field" :'q_tariff',
"transform" : color_mapper},
line_color = "gray",
line_width = 0.25,
fill_alpha = 1)
state_line = p.multi_line('xs','ys', source = state_geosource,
line_color = "black",
line_width = 0.5)
# Create hover tool
p.add_tools(HoverTool(renderers = [states],
tooltips = [('County','@NAME'),
('Tariff Increase','@tariff_change'),
('Population','@2017_population'),
('Est. % Change in Auto Sales','@lost_cars'),
('Est. Lost Jobs','@lost_jobs'),]))
#### Some features to make it a bit nicer.
p.axis.visible = False
p.background_fill_color = "grey"
p.background_fill_alpha = 0.25
p.toolbar.autohide = True
p.add_layout(color_bar, "right")
## Send to doc file, create a webpage from doc file on github
# then had weebly website point to that .html file. That's how
# I got this to work.
file_path = os.getcwd()
doc_path = file_path +"\\docs"
outfp = doc_path + "\\us_china_exports_map.html"
# Save the map
save(p, outfp)
# Not sure if this is important, but seemed to start working once
# I ran it
html = file_html(p, CDN, outfp)
p.add_layout?
1.4e7
print("this\nhi")
###Output
this
hi
###Markdown
Step 1: Get Shapefiles The next couple of cells download the requisite shapefiles from the US Census. They are unzipped into a shapefiles folder with a county subfolder (and a state subfolder below), so the code assumes that folder structure relative to your working directory.
###Code
cwd = os.getcwd()
county_url = "https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip"
r = requests.get(county_url )
county_shapefile = zf.ZipFile(io.BytesIO(r.content))
county_shapefile.extractall(path = cwd + "\\shapefiles\\county")
del r, county_shapefile
###Output
_____no_output_____
###Markdown
Then do the same thing for states (so we can draw state lines as well). What's cool about these shapefiles is that you can then layer on other things: roads, rivers, lakes.
###Code
state_url = "https://www2.census.gov/geo/tiger/TIGER2017/STATE/tl_2017_us_state.zip"
r = requests.get(state_url)
state_shapefile = zf.ZipFile(io.BytesIO(r.content))
state_shapefile.extractall(path = cwd + "\\shapefiles\\state")
del r, state_shapefile
###Output
_____no_output_____
###Markdown
Step 2: Some basic cleaning We will grab the tariff data and compute the tariff change. Then we will merge it with the geopandas dataframe.
###Code
# Grab the tradedata...
file_path = os.getcwd()
trade_data = pq.read_table(file_path + "\\data\\total_trade_data.parquet").to_pandas()
trade_data["time"] = pd.to_datetime(trade_data.time)
trade_data.set_index(["area_fips", "time"],inplace = True)
trade_data["tariff_change"] = trade_data.groupby(["area_fips"]).tariff.diff(12)
trade_data.sort_values(["area_fips", "time"], inplace = True)
trade_data.head()
###Output
_____no_output_____
###Markdown
Now we will grab the county-level shapefile
###Code
cwd = os.getcwd()
county_shape = cwd + "\\shapefiles\\county\\tl_2017_us_county.shx"
us_map = gpd.read_file(county_shape)
us_map = us_map.to_crs({'init': 'epsg:3395'})
us_map.head()
###Output
_____no_output_____
###Markdown
A little bit more cleaning so a merge can be done.
###Code
us_map["area_fips"] = (us_map.STATEFP.astype(str) + us_map.COUNTYFP.astype(str)).astype(int)
tariff_df = trade_data.xs('2018-12-1', level=1).copy()
tariff_df["fips_code"] = tariff_df.index
tariff_df["fips_code"] = tariff_df["fips_code"].astype(int)
tariff_df.head()
###Output
_____no_output_____
###Markdown
Then merge the geopandas dataframe with the regular dataframe
###Code
us_map = us_map.merge(tariff_df[["tariff_change","fips_code"]], left_on='area_fips',
right_on = "fips_code", how = "inner", indicator = True)
us_map.head()
###Output
_____no_output_____
###Markdown
Now we will drop Alaska, Hawaii, and Puerto Rico, and bring in the state files too. Then plot.
###Code
us_map.set_index("STATEFP", inplace = True)
drop_list = ["02","15","72"]
us_map.drop(drop_list, inplace = True)
state_shape = cwd + "\\shapefiles\\state\\tl_2017_us_state.shx"
state_map = gpd.read_file(state_shape)
state_map = state_map.to_crs({'init': 'epsg:3395'})
state_map.set_index("STATEFP", inplace = True)
drop_list = ["02","15","72","78","69","66","60",]
state_map.drop(drop_list, inplace = True)
###Output
_____no_output_____
###Markdown
Step 3: Plot it. That's what we do below
###Code
us_map["q_tariff"] = pd.qcut(us_map["tariff_change"], 10,labels = False, duplicates='drop')
us_map.q_tariff.replace(np.nan,0,inplace = True)
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1,1,figsize = (12,8))
plt.tight_layout()
plt.rcParams.update(plt.rcParamsDefault) # This will reset defaults...
#################################################################################
# This is for the colorbar...
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="3%", pad=0.1)
#################################################################################
## This creates a discrete colorbar scheme...
# https://gist.github.com/jakevdp/91077b0cae40f8f8244a
N = 10
base = plt.cm.get_cmap("RdBu_r")
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
dcmap = base.from_list(cmap_name, color_list, N)
#################################################################################
# This is the normal mapping...
us_map.plot(column='q_tariff', ax = ax,
# THIS IS NEW, it says color it based on this column
cmap=dcmap,
alpha = 0.75,
vmin=0, vmax=us_map.q_tariff.max())
#################################################################################
# This then allows me to generate and edit the colorbar....
# https://stackoverflow.com/questions/53158096/editing-colorbar-legend-in-geopandas
sm = plt.cm.ScalarMappable(cmap=dcmap)
sm._A = []
cbr = fig.colorbar(sm, cax=cax)
cbr.set_label('Percentile in Tariff Distribution')
cbr.set_alpha(0.15)
cbr.set_ticks([0.10, 0.25,0.50,0.75, 0.90])
cbr.set_ticklabels(["10","25","50","75","90"], update_ticks=True)
#################################################################################
state_map.geometry.boundary.plot(color=None, edgecolor='k', alpha = 0.35, ax = ax)
#################################################################################
# Then some final stuff to clean things up....
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title("US County Tariff Exposure to China (as of Dec 2018)", fontsize = 16, loc= "left" )
#ax.text(-127,23, "Source: US Census, BLS", fontsize = 8)
fig_path = "C:\\github\\expenditure_tradeshocks\\figures"
plt.savefig(fig_path +"\\us_china_exports_map.png", bbox_inches = "tight", dpi = 1200)
plt.show()
###Output
_____no_output_____ |
fig2_cross_domain_comparison.ipynb | ###Markdown
Comparison of the molecular domain between cell lines and tumors for breast cancer. This notebook supports the second figure. It takes data from cell lines, PDXs and tumors, computes the domain-specific factors and compares them using the cosine similarity matrix. Finally, tumor data is projected on each of these domain-specific factors and the variance explained is computed to see how much tumor variance each factor captures. This figure also supports Fig Supp 1.
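As a reminder of the metric (a sketch of the math, relying on the fact that scikit-learn's PCA returns unit-norm component vectors): for a source component $u_i$ and a target component $v_j$, the cosine similarity $\cos\theta_{ij} = \frac{u_i \cdot v_j}{\lVert u_i \rVert \, \lVert v_j \rVert} = u_i \cdot v_j$ reduces to a plain dot product, which is what `source_components.dot(target_components.transpose())` computes below; absolute values close to 1 mean the two domains share that direction of variation.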
###Code
# Tissue to consider
tumor_type = 'Breast'
cell_line_type = 'BRCA'
pdx_type = 'BRCA'
# Normalization parameters
normalization = 'TMM'
transformation = 'log'
mean_center = True
std_unit = False
protein_coding_only = True
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
from sklearn.decomposition import PCA, FastICA, SparsePCA
from sklearn.externals.joblib import Parallel, delayed
import matplotlib.cm as cm
plt.style.use('ggplot')
#Import src implementations
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['KMP_DUPLICATE_LIB_OK']='True'
from data_reader.read_data import read_data
from normalization_methods.feature_engineering import feature_engineering
###Output
_____no_output_____
###Markdown
Import data
###Code
# Import tumor + cell line data (count data)
x_target, x_source, g, _, _ = read_data('cell_line',
'tumor',
'count',
cell_line_type,
tumor_type,
remove_mytochondria=False)
cl_vs_t = {'source':x_source,
'target':x_target}
cl_vs_t_genes = g
del g, x_target, x_source
print('Cell lines vs Tumors data imported')
# Import tumor + pdx data (FPKM)
x_target, x_source, g, _, _ = read_data('pdx',
'tumor',
'fpkm',
pdx_type,
tumor_type,
remove_mytochondria=False)
pdx_vs_t = {'source':x_source,
'target':x_target}
pdx_vs_t_genes = g
del g, x_target, x_source
print('PDX vs tumors data imported')
# Import PDX + cell-line data (FPKM)
x_target, x_source, g, _, _ = read_data('cell_line',
'pdx',
'fpkm',
cell_line_type,
pdx_type,
remove_mytochondria=False)
cl_vs_pdx = {'source':x_source,
'target':x_target}
cl_vs_pdx_genes = g
del g, x_target, x_source
print('Cell lines vs PDX data imported')
# Normalization & Transformation for RNA-Seq data
for e in [cl_vs_t, pdx_vs_t, cl_vs_pdx]:
e['source'] = feature_engineering(e['source'], normalization, transformation, mean_center, std_unit)
e['target'] = feature_engineering(e['target'], normalization, transformation, mean_center, std_unit)
###Output
_____no_output_____
###Markdown
Cosine similarity computation. Computes the cosine similarity between the domain-specific components and plots it. Also breaks down the results per PC to show the overlap.
###Code
number_components = 20
def compute_components_PCA(x):
pca_instance = PCA(number_components)
pca_instance.fit(x)
return pca_instance.components_
def compute_components_Sparse_PCA(x):
pca_instance = SparsePCA(number_components, verbose=10)
pca_instance.fit(x)
print('computed')
return pca_instance.components_
def compute_components_ICA(x):
    from scipy.linalg import orth  # orthonormalize the estimated mixing matrix
    ica_instance = FastICA(number_components)  # FastICA has no n_jobs argument
    ica_instance.fit(x)
    print('COMPUTED')
    return orth(ica_instance.mixing_).transpose()
def compute_cosine_similarity(data, dim_red_method):
source_components = dim_red_method(data['source'])
target_components = dim_red_method(data['target'])
components = {
'source':source_components,
'target':target_components
}
return source_components.dot(target_components.transpose()), components
compute_components = compute_components_PCA
cl_vs_t_cosine_similarity, cl_vs_t_components = compute_cosine_similarity(cl_vs_t, compute_components)
pdx_vs_t_cosine_similarity, pdx_vs_t_components = compute_cosine_similarity(pdx_vs_t, compute_components)
cl_vs_pdx_cosine_similarity, cl_vs_pdx_components = compute_cosine_similarity(cl_vs_pdx, compute_components)
# Plot cosines similarity between cell lines and tumors
sns.heatmap(np.abs(cl_vs_t_cosine_similarity), cmap='seismic_r',\
center=0, vmax=1., vmin=0)
plt.ylabel('Cell lines', fontsize=25, color='black')
plt.xlabel('Tumors', fontsize=25, color='black')
plt.xticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.yticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_cosines_similarity_cell_lines_tumors_RNAseq_%s_%s.png'%(tumor_type, cell_line_type.replace('/','')),\
dpi=300)
else:
plt.savefig('./figures/supp_fig2_cosines_similarity_cell_lines_tumors_RNAseq_%s_%s.png'%(tumor_type, cell_line_type.replace('/','')),\
dpi=300)
plt.show()
# Plot cosines similarity between pdx and tumors
sns.heatmap(np.abs(pdx_vs_t_cosine_similarity), cmap='seismic_r',\
center=0, vmax=1., vmin=0)
plt.ylabel('PDX', fontsize=25, color='black')
plt.xlabel('Tumors', fontsize=25, color='black')
plt.xticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.yticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_cosines_similarity_pdx_tumors_RNAseq_%s_%s.png'%(tumor_type, pdx_type.replace('/','')),\
dpi=300)
else:
plt.savefig('./figures/supp_fig2_cosines_similarity_pdx_tumors_RNAseq_%s_%s.png'%(tumor_type, pdx_type.replace('/','')),\
dpi=300)
plt.show()
# Plot cosines similarity between cell lines and pdx
sns.heatmap(np.abs(cl_vs_pdx_cosine_similarity), cmap='seismic_r',\
center=0, vmax=1., vmin=0)
plt.ylabel('Cell lines', fontsize=25, color='black')
plt.xlabel('PDX', fontsize=25, color='black')
plt.xticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.yticks(np.arange(.5,number_components,2), range(1,number_components+1,2), fontsize=15, color='black')
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_cosines_similarity_cell_lines_pdx_RNAseq_%s_%s.png'%(tumor_type, pdx_type.replace('/','')),\
dpi=300)
else:
plt.savefig('./figures/supp_fig2_cosines_similarity_cell_lines_pdx_RNAseq_%s_%s.png'%(tumor_type, pdx_type.replace('/','')),\
dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Variance explained
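A short note on the quantity computed below (a sketch of the math matching `target_variance_projected`): for a factor $w_k$ taken from either domain, the share of tumor variance it captures is $R_k = \operatorname{Var}(X_{\mathrm{tumor}} w_k) \, / \, \sum_j \operatorname{Var}(X_{\mathrm{tumor},j})$, i.e. the variance of the tumor data projected onto $w_k$ divided by the total tumor variance summed over genes.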
###Code
# Tumor variance explained by cell lines
def target_variance_projected(data, components):
target_projected_variance = np.var(data['target'].dot(components['target'].transpose()),0)
source_projected_variance = np.var(data['target'].dot(components['source'].transpose()),0)
target_total_variance = np.sum(np.var(data['target'], 0))
return {
'source': source_projected_variance / target_total_variance,
'target': target_projected_variance / target_total_variance
}
# Compute target projected variance
cl_vs_t_variance = target_variance_projected(cl_vs_t, cl_vs_t_components)
cl_vs_pdx_variance = target_variance_projected(cl_vs_pdx, cl_vs_pdx_components)
pdx_vs_t_variance = target_variance_projected(pdx_vs_t, pdx_vs_t_components)
#####
# Cell lines vs Tumors
#####
plt.figure(figsize=(8,5))
plt.plot(np.arange(1, number_components+1), cl_vs_t_variance['target'],\
label='Tumor Principal Component', linewidth=3)
plt.plot(np.arange(1, number_components+1), cl_vs_t_variance['source'],\
label='Cell line Principal Component', linewidth=3)
plt.xticks(np.arange(1, number_components+1, 2), fontsize=15, color='black')
max_var = cl_vs_t_variance['target'][0]
plt.ylim(0,1.1*max_var)
plt.yticks(np.arange(0, 1.1*max_var,0.02), (np.arange(0, 1.1*max_var,0.02)*100).astype(int), fontsize=15, color='black')
del max_var
plt.xlabel('Factor number', fontsize=20, color='black')
plt.ylabel('Proportion of tumor variance', fontsize=20, color='black')
plt.legend(fontsize=17)
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_variance_explained_cl_vs_t_%s_%s.png'%(tumor_type, cell_line_type.replace('/','')),\
dpi=300)
else:
plt.savefig('./figures/supp_fig2_variance_explained_cl_vs_t_%s_%s.png'%(tumor_type, cell_line_type.replace('/','')),\
dpi=300)
plt.show()
#####
# PDX vs Tumors
#####
plt.figure(figsize=(8,5))
plt.plot(np.arange(1, number_components+1), pdx_vs_t_variance['target'],\
label='Tumor Principal Component', linewidth=3)
plt.plot(np.arange(1, number_components+1), pdx_vs_t_variance['source'],\
label='PDX Principal Component', linewidth=3)
plt.xticks(np.arange(1, number_components+1, 2), fontsize=15, color='black')
max_var = pdx_vs_t_variance['target'][0]
plt.ylim(0,1.1*max_var)
plt.yticks(np.arange(0, 1.1*max_var,0.02), (np.arange(0, 1.1*max_var,0.02)*100).astype(int), fontsize=15, color='black')
del max_var
plt.xlabel('Factor number', fontsize=20, color='black')
plt.ylabel('Proportion of tumor variance', fontsize=20, color='black')
plt.legend(fontsize=17)
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_variance_explained_pdx_vs_t_%s_%s.png'%(tumor_type, pdx_type), dpi=300)
else:
plt.savefig('./figures/supp_fig2_variance_explained_pdx_vs_t_%s_%s.png'%(tumor_type, pdx_type), dpi=300)
plt.show()
#####
# Cell lines vs PDX
#####
plt.figure(figsize=(8,5))
plt.plot(np.arange(1, number_components+1), cl_vs_pdx_variance['target'],\
label='PDX Principal Component', linewidth=3)
plt.plot(np.arange(1, number_components+1), cl_vs_pdx_variance['source'],\
label='Cell line Principal Component', linewidth=3)
plt.xticks(np.arange(1, number_components+1, 2), fontsize=15)
max_var = cl_vs_pdx_variance['target'][0]
plt.ylim(0,1.1*max_var)
plt.yticks(np.arange(0, 1.1*max_var,0.02), (np.arange(0, 1.1*max_var,0.02)*100).astype(int), fontsize=12)
plt.xlabel('Factor number', fontsize=20)
plt.ylabel('Proportion of PDX variance', fontsize=20)
plt.legend(fontsize=17)
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_variance_explained_cl_vs_t_%s_%s.png'%(tumor_type, pdx_type), dpi=300)
else:
plt.savefig('./figures/supp_fig2_variance_explained_cl_vs_t_%s_%s.png'%(tumor_type, pdx_type), dpi=300)
plt.show()
## Bootstrap analysis for variance
n_jobs = 5
def bootstrap_projected_variance(data_var, components, n=1):
np.random.seed()
bootstrapped_variance = []
for _ in range(n):
e = np.random.choice(range(data_var.shape[0]), size=data_var.shape[0], replace=True)
bootstrapped_variance.append(np.var(data_var[e].dot(components.transpose()),0))
return bootstrapped_variance
#####
# CL vs Tumor
#####
target = cl_vs_t['target']
source = cl_vs_t['source']
# Compute components
target_components = compute_components(target)
source_components = compute_components(source)
# Bootstrap target data and project it onto the different components.
n_bootstrap = 100
size_batch = 10
bootstrapped_target_variance = Parallel(n_jobs=n_jobs, verbose=10)\
(delayed(bootstrap_projected_variance)(target, target_components, size_batch)
for _ in range(int(n_bootstrap/size_batch)))
bootstrapped_target_variance = np.concatenate(bootstrapped_target_variance)
bootstrapped_source_variance = Parallel(n_jobs=n_jobs, verbose=10)\
(delayed(bootstrap_projected_variance)(target, source_components, size_batch)
for _ in range(int(n_bootstrap/size_batch)))
bootstrapped_source_variance = np.concatenate(bootstrapped_source_variance)
# Compute variance projected
target_proj_variance = np.var(target.dot(target_components.transpose()), 0)
source_proj_variance = np.var(target.dot(source_components.transpose()), 0)
target_var = np.sum(np.var(target,0))
source_proj_variance /= target_var
target_proj_variance /= target_var
bootstrapped_target_variance /= target_var
bootstrapped_source_variance /= target_var
# Plot figure
plt.figure(figsize=(8,5))
plt.plot(range(1, target_proj_variance.shape[0]+1), target_proj_variance, label='Tumor Principal Component')
plt.fill_between(range(1,target_proj_variance.shape[0]+1),
np.percentile(bootstrapped_target_variance, 1, axis=0),
np.percentile(bootstrapped_target_variance, 99, axis=0),
alpha=0.3)
plt.plot(range(1, source_proj_variance.shape[0]+1),source_proj_variance, label='Cell line Principal Component')
plt.fill_between(range(1, source_proj_variance.shape[0]+1),
np.percentile(bootstrapped_source_variance, 1, axis=0),
np.percentile(bootstrapped_source_variance, 99, axis=0),
alpha=0.3)
plt.xticks(np.arange(1, number_components+1, 2), fontsize=15, color='black')
max_var = np.percentile(bootstrapped_target_variance, 99, axis=0)[0]
plt.yticks(np.arange(0, 1.1*max_var,0.02), (np.arange(0, 1.1*max_var,0.02)*100).astype(int), fontsize=15, color='black')
del max_var
plt.xlabel('Factor number', fontsize=20, color='black')
plt.ylabel('Proportion of tumor variance', fontsize=20, color='black')
plt.legend(fontsize=17)
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_variance_explained_bootstrapped_cl_vs_t_%s_%s_boot_%s.png'%(tumor_type,
cell_line_type.replace('/',''),
n_bootstrap),\
dpi=300)
plt.show()
#####
# PDX vs Tumors
#####
target = pdx_vs_t['target']
source = pdx_vs_t['source']
target_components = compute_components(target)
source_components = compute_components(source)
n_bootstrap = 100
size_batch = 10
bootstrapped_target_variance = Parallel(n_jobs=n_jobs, verbose=10)\
(delayed(bootstrap_projected_variance)(target, target_components, size_batch)
for _ in range(int(n_bootstrap/size_batch)))
bootstrapped_target_variance = np.concatenate(bootstrapped_target_variance)
bootstrapped_source_variance = Parallel(n_jobs=n_jobs, verbose=10)\
(delayed(bootstrap_projected_variance)(target, source_components, size_batch)
for _ in range(int(n_bootstrap/size_batch)))
bootstrapped_source_variance = np.concatenate(bootstrapped_source_variance)
target_proj_variance = np.var(target.dot(target_components.transpose()), 0)
source_proj_variance = np.var(target.dot(source_components.transpose()), 0)
target_var = np.sum(np.var(target,0))
source_proj_variance /= target_var
target_proj_variance /= target_var
bootstrapped_target_variance /= target_var
bootstrapped_source_variance /= target_var
plt.figure(figsize=(8,5))
plt.plot(range(1, target_proj_variance.shape[0]+1), target_proj_variance, label='Tumor Principal Component')
plt.fill_between(range(1,target_proj_variance.shape[0]+1),
np.percentile(bootstrapped_target_variance, 1, axis=0),
np.percentile(bootstrapped_target_variance, 99, axis=0),
alpha=0.3)
plt.plot(range(1, source_proj_variance.shape[0]+1),source_proj_variance, label='PDX Principal Component')
plt.fill_between(range(1, source_proj_variance.shape[0]+1),
np.percentile(bootstrapped_source_variance, 1, axis=0),
np.percentile(bootstrapped_source_variance, 99, axis=0),
alpha=0.3)
plt.xticks(np.arange(1, number_components+1, 2), fontsize=15, color='black')
max_var = np.percentile(bootstrapped_target_variance, 99, axis=0)[0]
plt.ylim(0,max_var)
plt.yticks(np.arange(0, 1.1*max_var,0.02), (np.arange(0, 1.1*max_var,0.02)*100).astype(int), fontsize=15, color='black')
del max_var
plt.xlabel('Factor number', fontsize=20, color='black')
plt.ylabel('Proportion of tumor variance', fontsize=20, color='black')
plt.legend(fontsize=17)
plt.tight_layout()
if tumor_type == 'Breast':
plt.savefig('./figures/fig2_variance_explained_bootstrapped_pdx_vs_t_%s_%s_boot_%s.png'%(tumor_type,
pdx_type,
n_bootstrap),\
dpi=300)
plt.show()
###Output
[Parallel(n_jobs=5)]: Using backend LokyBackend with 5 concurrent workers.
[Parallel(n_jobs=5)]: Done 3 out of 10 | elapsed: 9.1s remaining: 21.2s
[Parallel(n_jobs=5)]: Done 5 out of 10 | elapsed: 9.1s remaining: 9.1s
[Parallel(n_jobs=5)]: Done 7 out of 10 | elapsed: 14.0s remaining: 6.0s
[Parallel(n_jobs=5)]: Done 10 out of 10 | elapsed: 14.1s finished
[Parallel(n_jobs=5)]: Using backend LokyBackend with 5 concurrent workers.
[Parallel(n_jobs=5)]: Done 3 out of 10 | elapsed: 5.4s remaining: 12.6s
[Parallel(n_jobs=5)]: Done 5 out of 10 | elapsed: 5.4s remaining: 5.4s
[Parallel(n_jobs=5)]: Done 7 out of 10 | elapsed: 10.5s remaining: 4.5s
[Parallel(n_jobs=5)]: Done 10 out of 10 | elapsed: 10.6s finished
|
EHR_Only/GBT/.ipynb_checkpoints/Comp_SMOTE-checkpoint.ipynb | ###Markdown
General Population
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_gpop_sm,out_train_hemorrhage_gpop_sm = sm.fit_resample(co_train_gpop,out_train_hemorrhage_gpop)
best_clf = xgBoost(co_train_gpop_sm, out_train_hemorrhage_gpop_sm)
scores(co_train_gpop_sm, out_train_hemorrhage_gpop_sm)
print()
scores(co_train_gpop, out_train_hemorrhage_gpop)
print()
scores(co_validation_gpop, out_validation_hemorrhage_gpop)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
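###Markdown
A quick sanity check (a sketch; it assumes the label arrays above are pandas/NumPy compatible) is to confirm that SMOTE actually balanced the outcome before reading too much into the resubstitution scores.
###Code
# Minimal sketch: class counts before and after SMOTE for the general-population split
import pandas as pd
print(pd.Series(out_train_hemorrhage_gpop).value_counts())     # original, imbalanced labels
print(pd.Series(out_train_hemorrhage_gpop_sm).value_counts())  # after SMOTE the classes should be equal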
###Markdown
High Continuity
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_high_sm,out_train_hemorrhage_high_sm = sm.fit_resample(co_train_high,out_train_hemorrhage_high)
best_clf = xgBoost(co_train_high_sm, out_train_hemorrhage_high_sm)
scores(co_train_high_sm, out_train_hemorrhage_high_sm)
print()
scores(co_train_high, out_train_hemorrhage_high)
print()
scores(co_validation_high, out_validation_hemorrhage_high)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
###Markdown
Low Continuity
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_low_sm,out_train_hemorrhage_low_sm = sm.fit_resample(co_train_low,out_train_hemorrhage_low)
best_clf = xgBoost(co_train_low_sm, out_train_hemorrhage_low_sm)
scores(co_train_low_sm, out_train_hemorrhage_low_sm)
print()
scores(co_train_low, out_train_hemorrhage_low)
print()
scores(co_validation_low, out_validation_hemorrhage_low)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
|
notebooks/experiments/lark_test.ipynb | ###Markdown
Import Lark logic. Instead of doing the entire parsing manually (after running through the first chapter), it's advised to use a prebuilt parser instead; it'll save more time. 1. Collect samples of the language. 2. Try fitting the examples (by copying the existing Lark rules). A toy grammar sketch appears at the end of the cell below.
###Code
grammy()
json_parser = Lark(grammy(), parser="lalr")
# print(json_parser.parse("true;").pretty())
# print(json_parser.parse("false;").pretty())
# print(json_parser.parse("1234;").pretty())
print(json_parser.parse(
"""
var hello = 30;
var poop = hello + 10;
""").pretty())
# print(json_parser.parse(text).pretty())
# subtract - me;
# multiply * me;
# divide / me;
# for file in get_ddub():
# print(file.read_text())
# print(json_parser.parse(file.read_text()).pretty())
# Sample statements in the target language (kept as reference text, not Python):
#   true; // Not false.
#   false; // Not *not* false.
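# A toy grammar sketch (hypothetical; the real rules live in grammy() and are not shown here).
# It only covers expression statements such as "1234;" and "hello + 10;" from the samples above.
toy_grammar = r"""
    start: stmt+
    stmt: expr ";"        -> expr_stmt
    expr: expr "+" term   -> add
        | term
    term: NUMBER | NAME
    %import common.CNAME -> NAME
    %import common.INT   -> NUMBER
    %import common.WS
    %ignore WS
"""
toy_parser = Lark(toy_grammar, parser="lalr")
print(toy_parser.parse("1234; hello + 10;").pretty())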
###Output
_____no_output_____ |
tensorflow_label_interactive.ipynb | ###Markdown
Loop through models
###Code
preds = []
for name, shape in tqdm(models, total=len(models)):
preds.append(infer(name, shape, 'images'))
df = pd.DataFrame(preds)
keywords = Path('keywords.txt').read_text().splitlines()
images_and_classes = [[col, k] for k in keywords for col in df.columns if k in df[col].to_numpy()]
print('found classes:', {x[1] for x in images_and_classes})
df.T.rename({i: name for i, name in enumerate(models)}, axis=1)
times = pd.DataFrame(TIMES, columns=['model', 't']).sort_values('t').reset_index(drop=True)
times
###Output
_____no_output_____
###Markdown
Create class dirs
###Code
existing_classes = {f.name for f in Path(f'images').iterdir() if f.is_dir()}
matched_classes = set(cls for _, cls in images_and_classes)
classes = matched_classes - existing_classes
print(f'creating new class dirs: {classes}')
for cls in classes:
Path(f'images/{cls}').mkdir(parents=True, exist_ok=False)
for img, cls in images_and_classes:
try:
o = f'images/{img}'
n = f'images/{cls}/{img}'
print(f'{o} -> {n}')
Path(o).rename(n)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Undo Labeling if needed
###Code
# undo_labeling('images')
###Output
_____no_output_____ |
work/GFW_climate_biomass_widgets.ipynb | ###Markdown
GFW climate biomass widgets
###Code
#!pip install progressbar2
#!pip install retrying
import geopandas as gpd
import pandas as pd
import numpy as np
import requests
import os
import json
import progressbar
from retrying import retry
%matplotlib inline
###Output
_____no_output_____
###Markdown
Table with biomass density and total biomass **GADM 3.6 admin 2**
###Code
df = gpd.read_file('/Users/Ben/Downloads/gadm36_shp/gadm36.shp')
df.head()
#gadm_ids = df[['GID_0', 'ID_0', 'NAME_0', 'ID_1', 'NAME_1', 'ID_2', 'NAME_2','GID_1','GID_2']]
#gadm_ids[gadm_ids['GID_2'] == 'AFG.2.1_1']
#tmp = gadm_ids[gadm_ids['GID_0']=='BRA']
#tmp[tmp['GID_1'] == 'BRA.2_1'].head()
missing_df = df[df['GID_2'] == '']
f'{len(missing_df)/len(df) * 100:3.2f}% of rows are missing admin-2 id codes.'
def process_gid_2(gid_2):
"""Return dict of iso (string), and admin_1 and admin_2 (ints) from gid_2 entry."""
try:
iso, admin_1, tmp_admin_2 = gid_2.split('.')
admin_2 = tmp_admin_2.split('_')[0]
return {'iso':iso, 'admin_1':int(admin_1), 'admin_2':int(admin_2)}
except:
return None
# Create list of GIDS to process
all_areas = []
for x in df['GID_2'].values:
tmp = process_gid_2(x)
if tmp:
all_areas.append(tmp)
len(all_areas)
# Create gadm3.6 GID_2 data list
with open("./data/gadm_36_gid2.json", "w") as f:
for row in all_areas:
f.write(json.dumps(row) +'\n')
# Now that we have the codes for all areas, de-allocate the DataFrame to save RAM
df = 0
###Output
_____no_output_____
###Markdown
Begin here if gadm 3.6 data file exists
###Code
# Restore list of GID_2 data if the file exists
gid_list = "./data/gadm_36_gid2.json"
if os.path.exists(gid_list):
print("Found existing gadm-3.6 gid-2 file, restoring previous data! 🍺")
with open(gid_list,"r") as f:
all_areas = []
for row in f.readlines():
all_areas.append(json.loads(row))
print(f'Loaded {len(all_areas)} rows of data.')
all_areas[0:5]
###Output
_____no_output_____
###Markdown
The API contains an endpoint for `whrc-biomass` that computes the total biomass and biomass density of a given municipality; it uses the geostore v2 endpoint for GADM geometries.
###Code
len(all_areas)
# Use session to persist connection between requests (for speed-up) http://docs.python-requests.org/en/master/user/advanced/
s = requests.Session()
@retry(stop_max_attempt_number=5, wait_fixed=2000)
def make_query(area):
try:
r = s.get(f"https://production-api.globalforestwatch.org/v1/whrc-biomass/admin/{area['iso']}/{area['admin_1']}/{area['admin_2']}")
if r.status_code == 200:
return r.json().get('data').get('attributes')
else:
return None
except:
#print(f"Failed on {area['iso']}/{area['admin_1']}/{area['admin_2']}")
#raise IOError(f"EE failure: {r.status_code}")
return None
def find_in_written_data(written_data, iso, admin_1, admin_2):
for row in written_data:
if row.get('iso') == iso and row.get('admin_1') == admin_1 and row.get('admin_2') == admin_2:
return True
else:
pass
return False
def get_written_data(backup_file):
'''Create or restore data from a backup file e.g ./tmp_whrc_data.json '''
if os.path.exists(backup_file):
#print("Found existing file, restoring previous data! 🍺")
written_data = []
with open(backup_file, 'r') as f:
for line in f.readlines():
written_data.append(json.loads(line))
return written_data
else:
#print("No previous data found, starting queries from scratch... 🏃♂️")
return []
def check_writen_lenght():
check_data = []
with open("./tmp_whrc_data.json", 'r') as f:
for line in f.readlines():
check_data.append(json.loads(line))
print(f"Number of records sucessfully written: {len(check_data):,g}")
# Single thread process
# %%time
# with open(backup_file, "a+") as f:
# with progressbar.ProgressBar(max_value=len(all_areas)) as bar:
# for n, area in enumerate(all_areas[0:40]):
# bar.update(n)
# if not find_in_written_data(written_data, area.get('iso'), area.get('admin_1'), area.get('admin_2')):
# # maybe we should try it several times if it fails....
# tmp_data = make_query(area)
# if tmp_data:
# tmp_d = {**area, **tmp_data}
# written_data.append(tmp_d)
# f.write(json.dumps(tmp_d) +'\n') # write a line to a temporary file incase the process fails and all data is lost
# else:
# pass
def process_single_thread(gid_list, backup_file="./tmp_whrc_data.json"):
with open(backup_file, "a+") as f:
with progressbar.ProgressBar(max_value=len(gid_list)) as bar:
for n, area in enumerate(gid_list):
bar.update(n)
if not find_in_written_data(written_data, area.get('iso'), area.get('admin_1'), area.get('admin_2')):
# maybe we should try it several times if it fails....
tmp_data = make_query(area)
if tmp_data:
tmp_d = {**area, **tmp_data}
written_data.append(tmp_d)
f.write(json.dumps(tmp_d) +'\n') # write a line to a temporary file incase the process fails and all data is lost
else:
pass
def process_gid_list(gid_list, backup_file="./tmp_whrc_data.json"):
"""e.g. process_gid_list(all_areas[0:20])"""
written_data = get_written_data(backup_file)
with open(backup_file, "a+") as f:
#with progressbar.ProgressBar(max_value=len(gid_list)) as bar:
for n, area in enumerate(gid_list):
#bar.update(n)
#print(f"Already processed area = {find_in_written_data(written_data, area.get('iso'), area.get('admin_1'), area.get('admin_2'))}")
if not find_in_written_data(written_data, area.get('iso'), area.get('admin_1'), area.get('admin_2')):
tmp_data = make_query(area)
if tmp_data:
tmp_d = {**area, **tmp_data}
written_data.append(tmp_d)
f.write(json.dumps(tmp_d) +'\n') # write a line to a temporary file incase the process fails and all data is lost
else:
pass
###Output
_____no_output_____
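###Markdown
Before launching the full loop, it can help to smoke-test the wrapper on a single area (a sketch; the GID codes below are purely illustrative and the returned fields depend on the live API).
###Code
# Illustrative smoke test of make_query on one hypothetical area code
sample_area = {'iso': 'BRA', 'admin_1': 1, 'admin_2': 1}
print(make_query(sample_area))  # expect a dict of attributes, or None if the request fails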
###Markdown
Single thread requests
###Code
process_single_thread(all_areas[0:10000])
check_writen_lenght()
###Output
Number of records sucessfully written: 14,436
###Markdown
Multithreaded requests
###Code
from multiprocessing import Pool
len(all_areas)
step_size = 100
chunked_list = [all_areas[i:i + step_size] for i in range(0, len(all_areas[200000:200200]), step_size)]
print(f"{len(chunked_list)} chunks, with {len(chunked_list[0])} requests per chunk")
#chunked_list[0]
%%time
with Pool(100) as p:
p.map(process_gid_list, chunked_list)
check_writen_lenght()
###Output
_____no_output_____
###Markdown
Load the written data and create a final output file
###Code
# # If you need to load/restore the data from a tmp file (due to failure etc) you can do the following...
written_data = []
with open("./tmp_whrc_data.json", 'r') as f:
for line in f.readlines():
written_data.append(json.loads(line))
# Final table needs row names of 'biomassdensity','gid_0','id_1','id_2','totalbiomass','areaHa'. Use rename function below
output_df = pd.DataFrame(written_data)
output_df.head()
len(output_df)
output_df.keys()
output_df = output_df.rename(index=str, columns={'admin_1':'id_1','admin_2':'id_2','biomassDensity':'biomassdensity','totalBiomass':'totalbiomass'})
output_df.head()
# Finally, save the file
output_df.to_csv('./whrc_biomass.csv')
###Output
_____no_output_____ |
Freshworks_Task.ipynb | ###Markdown
Server
###Code
import sys   # used for the getsizeof checks below
import time  # used for the time-to-live bookkeeping
from threading import Thread  # used in the client test cases
map={} # global data storage
def create(key,value,timeout=0): # timeout provided in seconds
if key in map:
print("Error !! Key is already stored") #error mmsg
else:
if(key.isalpha()): # string key #1073741824 bytes ==1 Gb
if sys.getsizeof(map)<(1073741824) and sys.getsizeof(value)<=(16*1024): #Check file size<=1GB and Json obj size<=16kb
if timeout==0:
l=[value,-1]
else:
l=[value,time.time()+timeout] #adding timeout incase its not zero
if len(key)<=32:# key is max of 32 chars
map[key]=l
else:
print("Error !! Memory limit")#error mssg
else:
print("Error !! Key should have alphabet only")#error mssg
def delete(key):
if key not in map:
print("Error !! Key is not in Database") #error mssg
else:
list=map[key]
if list[1]!=-1: # time to live parameter isnt -1(means its provided by user)
current_time=time.time()
if current_time<list[1]: #Expiry & current time compared
del map[key]
print("Success! key is now deleted")
else:
print("Error !! time to live off expired") #error as time to live has expired so cant delete it
else:# time to live is -1 then just delete the key
del map[key]
print("Success! key is now deleted")
def read(key):
if key not in map:
print("Error !! Key is not in Database") #error mssg
else:
list=map[key]
if list[1]!=-1: # time to live parameter isnt -1(means its provided by user)
current_time=time.time()
if current_time<list[1]:#Expiry & current time compared
mapping=str(key)+" : "+str(list[0]) # Key - JSon pair returned from DB
return mapping
else:
print("Error !! time to live off expired") #error mssg
else:
mapping=str(key)+" : "+str(list[0])
return mapping
###Output
_____no_output_____
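###Markdown
One design note before the client tests: the first test case calls `create` and `delete` from two threads against the shared dict. Single dict operations are atomic under CPython's GIL, but the check-then-write sequences above are not, so a production version would want a lock. A minimal sketch (hypothetical wrappers, not part of the task code):
###Code
# Hypothetical sketch: serialize access to the shared datastore with a lock
import threading
store_lock = threading.Lock()
def create_safe(key, value, timeout=0):
    with store_lock:  # only one thread may run the check-then-insert sequence at a time
        create(key, value, timeout)
def delete_safe(key):
    with store_lock:
        delete(key)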
###Markdown
Client Testcase 1
###Code
json1={ "brand": "Ford",
"model": "Mustang",
"year": 1964}
create("car",json1)
#to create a key with key & json obj given and no time-to-live property
json2=[23,12]
create("Money",json2,200)
#to create a key with key & json obj given and with time-to-live property value given(number of seconds)
print(read("car"))
#PRINTS key in Json object format 'key_name:value'
print(read("Money"))
#PRINTS key in Json object format 'key_name:value' if the (time to live) is not expired else it throws an ERROR !
json3={"32":"google"}
create("car",json3)
# it returns an error since the key_name is already present in the datastore
delete("car")
#it deletes the given key & json obj from datastore
# Using multithreading
json4=["New","Year"]
thread1=Thread(target=(create),args=("moker",json4)) #as per the operation
thread1.start()
thread2=Thread(target=(delete),args=("moker",)) #as per the operation
thread2.start()
print("Final datastore",map)
###Output
car : {'brand': 'Ford', 'model': 'Mustang', 'year': 1964}
Money : [23, 12]
Error !! Key is already stored
Success! key is now deleted
Success! key is now deleted
Final datastore {'Money': [[23, 12], 1609154201.7596617]}
###Markdown
Test Case 2
###Code
delete("just_key")
json1={ "brand": "Ford",
"model": "Mustang",
"year": 1964}
create("cars24",json1)
#Error! as alphanumeric key with key & json obj given and no time-to-live property
json2=[23,12]
create("Money",json2,10)
#to create a key with key & json obj given and with time-to-live property value given(number of seconds) as just 10 secs
print(read("Money"))
#PRINTS key in Json object format 'key_name:value' if the (time to live) is not expired else it throws an ERROR !
json3={"10":"FreshWork"}
create("TechCos",json3)
print("Final datastore",map)
print(read("Money"))
# throws error as run after 10 sec(time to live expired)
###Output
Error !! time to live off expired
None
|
analysis/alessandro_pisa/.ipynb_checkpoints/milestone2-checkpoint.ipynb | ###Markdown
Edibility of Mushrooms ---Exploring the different features of mushrooms with the hope of being able to identify edible versus poisonous mushrooms. The data describes whether the mushrooms are definitely edible, or poisonous or of unknown edibility, in which case they are grouped with the poisonous category ("When in doubt throw them out"). Furthermore, the data covers the observable physical features of the hypothetical mushrooms, such as gill size and spacing, odor, cap color, and much more. The data provides 8124 samples, recording 23 different parameters. The data is originally from this dataset on Kaggle: [Original Data](https://www.kaggle.com/uciml/mushroom-classification)
###Code
# !pip install pandas seaborn numpy matplotlib # Uncomment if modules are not found
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from scripts import project_functions
###Output
_____no_output_____
###Markdown
Load and Preprocess Data
###Code
df = project_functions.load_and_process_data("../../data/raw/mushrooms.csv")
df
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis ---In the following visualizations I will be comparing the features of each mushroom with the number of poisonous/edible mushrooms that have that specific characteristic. This is in the hope of noticing prevalent features that can tell us whether a mushroom is definitely poisonous or edible. I have separated the characteristics into related groups, including cap, gill, stalk, veil, ring, life and miscellaneous features, for easier analysis. Cap Related Features ---Overall, when it comes to the cap of the mushroom there are not really enough clear distinctions to determine if a mushroom is poisonous or not. Many of the features are shared between edible and poisonous mushrooms, for example a convex cap shape, a scaly surface or a brown cap. Nonetheless, there are some characteristics, like a knobbed cap shape and red or yellow caps, that seem to be a lot more common among poisonous mushrooms, so it might be better to play it safe and not eat those.
###Code
project_functions.show_cap_related_features(df)
###Output
_____no_output_____
###Markdown
Gill Related Features ---When it comes to the gill of the mushroom there are actually a couple of characteristics that can help us distinguish between an edible and a poisonous mushroom. The most prevalent one is the gill color, as it appears that green and buff colored gills are really good indicators of the toxicity of a particular mushroom. Similarly, if a mushroom has a narrow gill it would be better to assume it is toxic, as there are more than double the number of poisonous mushrooms with narrow gills than edible ones. As for the other characteristics, there is not enough distinction to be able to tell by any one of them alone.
###Code
project_functions.show_gill_related_features(df)
###Output
_____no_output_____
###Markdown
Stalk Related Features ---For the stalk of the mushroom there seems to be a relation with the roughness of the stalk both above and below the ring, as well as with the color of the stalk. Overall, mushrooms with a silky stalk colored either cinnamon, yellow, or buff (both above and below the ring) appear to be mostly poisonous. In this case the better indicator is the color, as if the stalk happens to be one of those colors it is most definitely poisonous, but these colors are rarer, as other colors seem to be more likely. Another notable characteristic is that mushrooms with a rooted stalk are very likely to be edible, as well as mushrooms with gray or red stalks (both above and below the ring).
###Code
project_functions.show_stalk_related_features(df)
###Output
_____no_output_____
###Markdown
Veil Related Features ---In the case of the veil, mushrooms with an orange or brown veil seem to be edible, although they are quite rare. When it comes to a partial veil, both poisonous and edible mushrooms share this characteristic.
###Code
project_functions.show_veil_related_features(df)
###Output
_____no_output_____
###Markdown
Ring Related Features ---As for the ring of the mushrooms, it appears that mushrooms with no rings, or with large rings, are poisonous. This appears to be a good indicator of toxicity. As for the edibility of mushrooms, if a mushroom has a flaring ring type it is likely to be edible, and similarly (but not always) a pendant ring could suggest an edible mushroom.
###Code
project_functions.show_ring_related_features(df)
###Output
_____no_output_____
###Markdown
Life Related Features ---For the population type and habitat of the mushrooms there are some characteristics that allow us to know if the mushroom is edible or not. For example, if a mushroom has an abundant or numerous population the mushroom is most likely edible, and similarly, funnily enough, if it grows on waste the mushroom is edible. As for poisonous mushrooms, it is better not to eat any mushrooms found on paths, as the majority of these seem to be poisonous.
###Code
project_functions.show_life_related_features(df)
###Output
_____no_output_____
###Markdown
Miscellaneous Features ---Surprisingly, this category has some of the most telling signs for the edibility of a mushroom. When it comes to the odor of the mushroom, most mushrooms that smell like something are likely to be poisonous; more specifically, any mushroom that smells spicy, fishy, foul, pungent, musty, or creosote will be poisonous, while most mushrooms with no smell are edible. Another good sign for poisonous mushrooms is the color of the spore print, as any mushrooms with green or chocolate colored spore prints are highly likely to be poisonous. When it comes to bruises, most edible mushrooms have them while poisonous ones do not, but this is not always the case, so it's not such a good characteristic to tell them apart.
###Code
project_functions.show_miscellaneous_features(df)
###Output
_____no_output_____ |
figure2_create_plots.ipynb | ###Markdown
###Code
import matplotlib as mpl  # explicit import; mpl is used below for tick formatting
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import numpy as np
from algae_population import *
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
np.set_printoptions(formatter={'float': lambda x: "{0:0.3f}".format(x)})
# %matplotlib tk
import pickle
# solutions = pickle.load(open('figure1.p','rb'))
solutions = pickle.load(open('figure2.p','rb'))
# def figure1(solutions, tend=None, K = 10):
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
fig, ax = plt.subplots(1,3,figsize=(18,6))
axins1 = inset_axes(ax[0], width="35%", height="35%",loc=7)
axins2 = inset_axes(ax[2], width="35%", height="35%",loc=7)
tend = None  # plotting end time; taken from the first solution and shortened if a sporulation event occurs
for sol in solutions:
t0 = sol.t[0]
if tend is None:
tend = sol.t[-1]
if sol.t_events[0].size > 0 and sol.t_events[0] < tend:
print(f'sporulation event at {sol.t_events[0]}')
tend = sol.t_events[0]
t = np.arange(t0, tend)
z = sol.sol(t)
# fig,ax = plt.subplots(1,3,figsize=(20,6))
# ax[0].plot(t, z[:-1,:].T,'-o')
# ax[0].set_ylabel('Age $a_i$')
# ax[0].set_xlabel('days')
# # ax[0].legend(['a0', 'a1', 'a2'], shadow=True)
# ax[0].set_title('Population age evolution')
# mass and inhibitor
biomass = z[:-1, :]
I = z[-1,:]
# what we gain is:
_yield = np.sum( biomass.T - biomass[:,0], axis=1)
ax[0].plot(t, _yield,'-',label = sol['s'][0])
# _yield[_yield==0] = 0.001
# ax[0].plot(t, np.log(_yield),'-o',label = sol['s'][0])
ax[0].set_xlabel('days')
ax[0].set_ylabel(r'Yield kg/m$^3$')
# ax.set_title('Total biomass')
ax[0].set_ylim([-1, 11])
ax[0].set_xlim([0,100])
# ax[0].set_yscale('symlog')
# ax[0].legend()
ax[0].text(2.1, 9.5, 'a)', size=14)
axins1.plot(t[:10], _yield[:10],'-')
# if sol.t_events[0].size > 0:
# ax[0].annotate('sporulation', xy=(tend, 0), xycoords='data',
# xytext=(tend, 0.05),
# arrowprops=dict(arrowstyle="->",
# connectionstyle="arc3", color='red')
# )
ax[2].plot(t,I,'-',label= sol['s'][0])
ax[2].set_xlabel('days')
ax[2].set_ylabel(r'$I$')
ax[2].plot([0,120],[1.8, 1.8],'k--',lw=0.1)
ax[2].set_xlim([0,100])
# ax[1].set_yscale('symlog')
# ax[1].set_title("Inhibitor")
ax[2].text(10,1.65, 'c)',fontsize=14)
axins2.plot(t[:10], I[:10])
ind = np.argmax(_yield >= 0.9*9.8)
# the percentage of youngs
youngs = int(sol['s'][0].split('/')[0])
# print(youngs)
settling_time = t[ind]
if settling_time == 0:
settling_time = np.nan
# ax[2].plot(t, np.cumsum(_yield)/biomass[:,0].sum(),'-',label = sol['s'][0])
# if settling_time > 0:
ax[1].plot(youngs, settling_time,'o-', label = sol['s'][0])
# _yield[_yield==0] = 0.001
# ax[0].plot(t, np.log(_yield),'-o',)
ax[1].set_xlabel('Percentage of young')
ax[1].set_ylabel(r'Time to 90\%')
# ax[2].set_xlim([0,100])
# ax.set_title('Total biomass')
# ax[0].set_ylim([-1, 11])
# ax[0].set_xlim([0,100])
# ax[0].set_yscale('symlog')
ax[1].legend()
fmt = mpl.ticker.StrMethodFormatter("{x:g}")
ax[0].yaxis.set_major_formatter(fmt)
ax[0].yaxis.set_minor_formatter(fmt)
ax[1].yaxis.set_major_formatter(fmt)
ax[1].yaxis.set_minor_formatter(fmt)
ax[1].text(20,57, 'b)',fontsize=14)
# ax[0].legend(bbox_to_anchor=(1.5, 1.0))
plt.show()
# return fig, ax
fig.savefig('figure2.png',dpi=300, bbox_inches='tight',
transparent=True,
pad_inches=0)
settling_times = []
for sol in solutions:
t0 = sol.t[0]
if tend is None:
tend = sol.t[-1]
if sol.t_events[0].size > 0 and sol.t_events[0] < tend:
print(f'sporulation event at {sol.t_events[0]}')
tend = sol.t_events[0]
t = np.arange(t0, tend)
z = sol.sol(t)
# mass and inhibitor
biomass = z[:-1, :]
I = z[-1,:]
# what we gain is:
_yield = np.sum( biomass.T - biomass[:,0], axis=1)
ind = np.argmax(_yield >= 0.9*9.8)
youngs = int(sol['s'][0].split('/')[0])
print(youngs)
settling_time = t[ind]
if settling_time == 0:
settling_time = np.nan
settling_times.append(settling_time)
###Output
100
90
80
70
60
50
40
30
20
10
0
|
notebooks/B02_ML_Examples.ipynb | ###Markdown
ML model examples
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context('notebook', font_scale=1.5)
###Output
_____no_output_____
###Markdown
Dimension reduction
###Code
from sklearn.datasets import load_breast_cancer
bc = load_breast_cancer(as_frame=True)
bc.data.head()
bc.target_names
bc.target.head()
! python3 -m pip install --quiet umap-learn
! python3 -m pip install --quiet phate
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from umap import UMAP
from phate import PHATE
dr_models = {
'PCA': PCA(),
't-SNE': TSNE(),
'UMAP': UMAP(),
'PHATE': PHATE(verbose=0),
}
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
fig, axes = plt.subplots(2,2,figsize=(8,8))
axes = axes.ravel()
for i, (k, v) in enumerate(dr_models.items()):
X = v.fit_transform(scaler.fit_transform(bc.data))
target = bc.target
ax = axes[i]
ax.scatter(X[:, 0], X[:, 1], c=target)
ax.set_xlabel(f'{k}1')
ax.set_ylabel(f'{k}2')
ax.set_xticks([])
ax.set_yticks([])
###Output
_____no_output_____
###Markdown
A3.2 Clustering: K-means, agglomerative hierarchical clustering, and mixture models
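For reference (a standard formulation, not specific to this dataset): k-means picks clusters $C_1,\dots,C_K$ and centroids $\mu_k$ to minimize the within-cluster sum of squares $\sum_{k=1}^{K}\sum_{x_i \in C_k}\lVert x_i - \mu_k\rVert^2$, while the Gaussian mixture fitted below relaxes this to soft assignments with per-component covariances.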
###Code
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture
cl_models = {
'true': None,
'k-means': KMeans(n_clusters=2),
'ahc': AgglomerativeClustering(n_clusters=2),
'gmm': GaussianMixture(n_components=2),
}
pca = PCA()
X = pca.fit_transform(scaler.fit_transform(bc.data))
fig, axes = plt.subplots(2,2,figsize=(8, 8))
axes = axes.ravel()
for i, (k, v) in enumerate(cl_models.items()):
if i == 0:
y = bc.target
else:
y = v.fit_predict(scaler.fit_transform(bc.data))
target = y
ax = axes[i]
ax.scatter(X[:, 0], X[:, 1], c=target)
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(k)
###Output
_____no_output_____
###Markdown
A3.3 Supervised learning: nearest neighbor, linear models, support vector machines, trees, and neural networks
###Code
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X = bc.data
y = bc.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
pd.Series(y_test).value_counts(normalize=True)
sl_modles = dict(
dummy = DummyClassifier(strategy='prior'),
knn = KNeighborsClassifier(),
lr = LogisticRegression(),
svc = SVC(),
nn = MLPClassifier(max_iter=500),
)
for name, clf in sl_modles.items():
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(f'{name}: {score:.3f}')
###Output
_____no_output_____
###Markdown
ML model examples
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context('notebook', font_scale=1.5)
###Output
_____no_output_____
###Markdown
Dimension reduction
###Code
from sklearn.datasets import load_breast_cancer
bc = load_breast_cancer(as_frame=True)
bc.data.head()
bc.target_names
bc.target.head()
%%capture
! python3 -m pip install --quiet umap-learn
! python3 -m pip install --quiet phate
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from umap import UMAP
dr_models = {
'PCA': PCA(),
't-SNE': TSNE(),
'UMAP': UMAP(),
}
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
fig, axes = plt.subplots(1,3,figsize=(12,4))
axes = axes.ravel()
for i, (k, v) in enumerate(dr_models.items()):
X = v.fit_transform(scaler.fit_transform(bc.data))
target = bc.target
ax = axes[i]
ax.scatter(X[:, 0], X[:, 1], c=target)
ax.set_xlabel(f'{k}1')
ax.set_ylabel(f'{k}2')
ax.set_xticks([])
ax.set_yticks([])
###Output
_____no_output_____
###Markdown
A3.2 Clustering: K-means, agglomerative hierarchical clustering, and mixture models
###Code
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture
cl_models = {
'true': None,
'k-means': KMeans(n_clusters=2),
'ahc': AgglomerativeClustering(n_clusters=2),
'gmm': GaussianMixture(n_components=2),
}
pca = PCA()
X = pca.fit_transform(scaler.fit_transform(bc.data))
fig, axes = plt.subplots(2,2,figsize=(8, 8))
axes = axes.ravel()
for i, (k, v) in enumerate(cl_models.items()):
if i == 0:
y = bc.target
else:
y = v.fit_predict(scaler.fit_transform(bc.data))
target = y
ax = axes[i]
ax.scatter(X[:, 0], X[:, 1], c=target)
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(k)
###Output
_____no_output_____
###Markdown
A3.3 Supervised learning: nearest neighbor, linear models, support vector machines, trees, and neural networks
###Code
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X = bc.data
y = bc.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
pd.Series(y_test).value_counts(normalize=True)
sl_modles = dict(
dummy = DummyClassifier(strategy='prior'),
knn = KNeighborsClassifier(),
lr = LogisticRegression(),
svc = SVC(),
nn = MLPClassifier(max_iter=500),
)
for name, clf in sl_modles.items():
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(f'{name}: {score:.3f}')
###Output
_____no_output_____
|
_notebooks/ML_Model2.ipynb | ###Markdown
ML_Model2 -- Titanic Case. 0. Background Info. Kaggle's case: Titanic - Machine Learning from Disaster. Information link: https://www.kaggle.com/c/titanic/overview
###Code
# import package
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, f1_score,roc_auc_score
from sklearn import tree
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier,GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression,SGDClassifier
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier
###Output
_____no_output_____
###Markdown
1. Load the data
###Code
#Load the data
data1 = pd.read_csv("train.csv")
data2 = pd.read_csv("test.csv")
data3 = pd.read_csv("gender_submission.csv")
data4=pd.merge(data3,data2)
data=pd.concat([data1,data4],axis=0)
data=data.reset_index()
data.head()
###Output
_____no_output_____
###Markdown
2. Pre-process the data (aka data wrangling) 1, Data cleaning
###Code
# drop the unrelated columns
data.drop(['PassengerId','Cabin','Ticket'],axis=1,inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
2, Identification and treatment of missing values and outliers.
###Code
# find the missing value
data.isnull().sum()
# Find the rows where Embarked is null
data[data['Embarked'].isnull()]
# Miss Amelie and Mrs. George Nelson embarked at 'S', according to
#https://www.encyclopedia-titanica.org/titanic-survivor/martha-evelyn-stone.html
data['Embarked'] = data['Embarked'].fillna('S')
data.corr()
# find the missing value in fare column
data[data['Fare'].isnull()]
#fill NA value within "Fare" column
data['Fare'] = data['Fare'].fillna(data.groupby(['Pclass'])['Fare'].mean()[3])
# Since Age has relatively few missing values,
# fill them with the mean value
data['Age'].fillna(data['Age'].mean(), inplace = True)
# Check each numerical column; compare the mean, max and min
data.describe()
# From the describe table, Fare has outliers above 400
sns.boxplot(x="Survived", y="Fare", data=data)
# Remove the Fare outliers above 400
data.drop(data[data.Fare > 400].index, inplace=True)
###Output
_____no_output_____
###Markdown
3, Feature engineering
###Code
# encoding the sex (categorical variable)
table1=pd.get_dummies(data['Sex'])
data=pd.concat([data, table1], axis=1)
# encoding the embarked (categorical variable)
table2=pd.get_dummies(data['Embarked'])
data=pd.concat([data, table2], axis=1)
###Output
_____no_output_____
###Markdown
3. Exploratory data analysis. 1, At least two plots describing different aspects of the data set (e.g. identifying outliers, histograms of different distributions, or scatter plots to explore correlations).
###Code
# heatmap for correlations
table3=data.drop(['Name','Sex','Embarked'],axis=1)
plt.figure(figsize=(8,8))
sns.heatmap(table3.astype(float).corr(), mask=np.triu(table3.astype(float).corr()), cmap = sns.diverging_palette(230, 20, as_cmap=True), annot=True, fmt='.1g', square=True, linewidths=.5, cbar_kws={"shrink": .5})
# The relationship between survival and the categorical variables (Sex, Embarked)
sns.pointplot(x="Embarked", y="Survived", hue="Sex", kind="box", data=data,palette="Set3")
###Output
_____no_output_____
###Markdown
2, Print a basic data description (e.g. number of examples, number features, number of examples in each class and such).
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1305 entries, 0 to 1308
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 index 1305 non-null int64
1 Survived 1305 non-null int64
2 Pclass 1305 non-null int64
3 Name 1305 non-null object
4 Sex 1305 non-null object
5 Age 1305 non-null float64
6 SibSp 1305 non-null int64
7 Parch 1305 non-null int64
8 Fare 1305 non-null float64
9 Embarked 1305 non-null object
10 female 1305 non-null uint8
11 male 1305 non-null uint8
12 C 1305 non-null uint8
13 Q 1305 non-null uint8
14 S 1305 non-null uint8
dtypes: float64(2), int64(5), object(3), uint8(5)
memory usage: 158.5+ KB
###Markdown
3, Print (or include in the plots) descriptive statistics (e.g. means, medians, standard deviation)
###Code
data.describe()
###Output
_____no_output_____
###Markdown
4. Partition data into train, validation and test sets. From the Lecture 06 slides: training set: 60% of the data (1305*0.6 = 783); validation set: 20% (1305*0.2 = 261); test set: 20% (1305*0.2 = 261).
###Code
train_data=data[:783]
valid_data=data[783:1044]
test_data=data[1044:]
###Output
_____no_output_____
###Markdown
5. Fit models on the training set (this can include a hyper-parameter search) and select the best based on validation set performance. 1, Building the feature matrices and labels for the train, validation and test data
###Code
def build_x(df):
    # Note: this fits a fresh StandardScaler on each split; fitting the scaler on the training
    # set only and reusing it on the validation/test sets would avoid leakage between splits.
    return StandardScaler().fit_transform(df.drop(columns=['Name','Sex','Embarked','index','Survived']))
train_x=build_x(train_data)
valid_x=build_x(valid_data)
test_x=build_x(test_data)
train_y = train_data['Survived'].values
valid_y = valid_data['Survived'].values
test_y = test_data['Survived'].values
###Output
_____no_output_____
###Markdown
2, Running the different models
###Code
#Decision Tree Classifier
parameters={'criterion':('gini','entropy'),
'splitter':('random','best'),'max_depth':range(1,5)}
clf=tree.DecisionTreeClassifier(random_state=30)
clf_gs=GridSearchCV(clf,parameters)
clf_gs=clf_gs.fit(train_x,train_y)
clf_score=clf_gs.score(valid_x,valid_y)
#Random Forest Classifier
parameters={'criterion':('gini','entropy'),
'max_features':('auto','sqrt','log2'),'max_depth':range(1,5)}
random_forest=RandomForestClassifier()
random_forest_rs=RandomizedSearchCV(random_forest,parameters)
random_forest_rs=random_forest_rs.fit(train_x,train_y)
random_forest_score=random_forest_rs.score(valid_x,valid_y)
#Gradient Boosting Classifier
Gradient_Boosting=GradientBoostingClassifier().fit(train_x,train_y)
Gradient_Boosting_score=Gradient_Boosting.score(valid_x,valid_y)
#Logistic Regression
parameters={'solver':('newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga')}
logis_R=LogisticRegression()
logis_R_gs=GridSearchCV(logis_R,parameters)
logis_R_gs=logis_R_gs.fit(train_x,train_y)
logis_R_score=logis_R_gs.score(valid_x,valid_y)
#Gaussian Naive Bayes(GNB)
GNB=GaussianNB().fit(train_x,train_y)
GNB_score=GNB.score(valid_x,valid_y)  # store the score under a new name instead of overwriting the GNB.score method
#Stochastic Gradient Descent (SGD). Note: as written this block tunes a second GradientBoostingClassifier (loss/learning_rate/n_estimators) rather than sklearn's SGDClassifier
parameters={'loss':('deviance','exponential'),'learning_rate':[0.01,0.05,0.1,0.2],'n_estimators':[50,100,150]}
SGD=GradientBoostingClassifier()
SGD_gs=GridSearchCV(SGD,parameters)
SGD_gs=SGD_gs.fit(train_x,train_y)
SGD_score=SGD_gs.score(valid_x,valid_y)
SGD_score
#xgboost
Xgboost=XGBClassifier().fit(train_x,train_y)
Xgboost_score=Xgboost.score(valid_x,valid_y)
###Output
_____no_output_____
###Markdown
3, tabulate the models ranked by validation performance and select the best one
###Code
results = pd.DataFrame({
'Model': ['Decision Tree', 'Random Forest Classifier','Gradient Boosting',
'Logistic Regression','Gaussian Naive Bayes','Stochastic Gradient Decent',
'xgbooste'],
'Score': [clf_score,random_forest_score,Gradient_Boosting_score,
              logis_R_score,GNB_score,SGD_score,Xgboost_score]})
result_df = results.sort_values(by='Score', ascending=False)
result_df = result_df.set_index('Score')
print(result_df)
###Output
Model
Score
0.915709 Random Forest Classifier
0.892720 Gaussian Naive Bayes
0.885057 Logistic Regression
0.865900 Decision Tree
0.865900 Gradient Boosting
0.858238 xgbooste
0.850575 Stochastic Gradient Decent
###Markdown
6. Print the results of the final model on the test set. This should include accuracy, F1-score and AUC.
###Code
#find the predicted value from test_data
Y_prediction = random_forest_rs.predict(test_x)
#Accuracy
accuracy=accuracy_score(test_y, Y_prediction)
print('accuracy:', accuracy)
# F1-score
f1 = f1_score(test_y, Y_prediction)  # store under a new name so the imported f1_score function is not shadowed
print('F1 score:', f1)
# AUC score
y_scores = random_forest_rs.predict_proba(test_x)[:,1]
r_a_score = roc_auc_score(test_y, y_scores)
print("ROC-AUC-Score:", r_a_score)
Final_result = pd.DataFrame({
'Indicator': ['Accuracy','F1 score','AUC Score'],
    'Score': [accuracy, f1, r_a_score]})
print(Final_result)
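# Added sketch (not part of the original notebook): the ROC curve behind the AUC reported
# above can be drawn with sklearn's roc_curve, using test_y and y_scores computed in this cell.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(test_y, y_scores)
plt.figure(figsize=(5, 5))
plt.plot(fpr, tpr, label='Random Forest (AUC = %.3f)' % r_a_score)
plt.plot([0, 1], [0, 1], linestyle='--', label='chance')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()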
###Output
Indicator Score
0 Accuracy 0.969349
1 F1 score 0.957447
2 AUC Score 0.995987
|
lessons/03_Simulated_Sky_Signal/simsky_timedomain.ipynb | ###Markdown
Simulated Sky Signal in time domainIn this lesson we will use the TOAST Operator `OpSimPySM` to create timestreams for an instrument given a sky model.
###Code
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
import toast
import healpy as hp
import numpy as np
env = toast.Environment.get()
env.set_log_level("DEBUG")
###Output
_____no_output_____
###Markdown
Scanning strategyBefore being able to scan a map into a timestream we need to define a scanning strategyand get pointing information for each channel.We use the same **satellite** scanning used in lesson 2 about scanning strategies,see the `02_Simulated_Scan_Strategies/simscan_satellite.ipynb` for more details.
###Code
focal_plane = fake_focalplane()
focal_plane.keys()
focal_plane["0A"]["fwhm_arcmin"]
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 0.5 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 64 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
from toast.todmap import TODSatellite, slew_precession_axis
detquat = {ch: focal_plane[ch]["quat"] for ch in focal_plane}
# Create distributed data
comm = toast.Comm()
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
from toast.todmap import (
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
data,
nnz=1,
dtype=np.int64,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
%matplotlib inline
hp.mollview(hits.data.flatten(), nest=True)
###Output
_____no_output_____
###Markdown
Define PySM parameters and instrument bandpassesThen we define the sky model parameters, choosing the desired set of `PySM` models and then we specify the band center and the bandwidth for a top-hat bandpass.Currently top-hat bandpasses are the only type supported by the operator, in the future we will implement arbitrary bandpasses.Then bandpass parameters can be added directly to the `focal_plane` dictionary:
###Code
for ch in focal_plane:
focal_plane[ch]["bandcenter_ghz"] = 70
focal_plane[ch]["bandwidth_ghz"] = 10
focal_plane[ch]["fwhm"] = 60*2
pysm_sky_config = ["s1", "f1", "a1", "d1"]
###Output
_____no_output_____
###Markdown
Run the OpSimPySM operatorThe `OpSimPySM` operator: * Creates top-hat bandpasses arrays (frequency axis and weights) as expected by `PySM` * Loops by channel and for each: * Creates a `PySMSky` object just with 1 channel at a time * Executes `PySMSky` to evaluate the sky models and bandpass-integrate * Calls `PySM` to perform distributed smoothing with `libsharp` * Gathers the map on the first MPI process * Applies coordinate transformation if necessary (not currently implemented in `libsharp`) * Use the `DistMap` object to communicate to each process the part of the sky they observe * Calls `OpSimScan` to rescan the map to a timeline
###Code
from toast.todmap import OpSimPySM
OpSimPySM?
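# Added sketch (not part of the original lesson): roughly what a top-hat bandpass
# (frequency axis and weights) looks like for the 70 GHz band with 10 GHz width defined
# above. The arrays OpSimPySM builds internally may differ in sampling and normalization;
# this is only illustrative.
import numpy as np
bandcenter, bandwidth = 70.0, 10.0  # GHz, matching the focal_plane settings above
freqs = np.linspace(bandcenter - bandwidth / 2, bandcenter + bandwidth / 2, 10)
weights = np.ones_like(freqs) / len(freqs)  # flat response, normalized to unit sum
print(freqs)
print(weights)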
opsim_pysm = OpSimPySM(
data,
comm=None,
pysm_model=pysm_sky_config,
apply_beam=True,
debug=True,
focalplanes=[focal_plane],
)
opsim_pysm.exec(data)
###Output
_____no_output_____
###Markdown
Plot output timelines
###Code
%matplotlib inline
import matplotlib.pyplot as plt
tod = data.obs[0]['tod']
pix = tod.cache.reference("pixels_0A")
import toast.qarray as qa
theta, phi, pa = qa.to_angles(tod.read_pntg(detector="0A"))
pix
num = 10000
plt.figure(figsize=(7, 5))
plt.plot(np.degrees(theta[:num]), tod.cache.reference("signal_0A")[:num], ".")
plt.xlabel("$Colatitude [deg]$")
plt.ylabel("$Signal [ \mu K_{RJ} ]$");
###Output
_____no_output_____
###Markdown
Bin the output to a map
###Code
from numba import njit
@njit
def just_make_me_a_map(output_map, signals):
"""Temperature only binner
Bins a list of (pix, signal) tuples into an output map,
it does not support polarization, so it just averages it out.
Parameters
----------
output_map : np.array
already zeroed output map
signals : numba.typed.List of (np.array[int64] pix, np.array[np.double] signal)
Returns
-------
hits : np.array[np.int64]
hitmap
"""
hits = np.zeros(len(output_map), dtype=np.int64)
for pix, signal in signals:
for p,s in zip(pix, signal):
output_map[p] += s
hits[p] += 1
output_map[hits != 0] /= hits[hits != 0]
return hits
from numba.typed import List
signals = List()
for obs in data.obs:
for ch in focal_plane:
signals.append((obs["tod"].cache.reference("pixels_%s" % ch), obs["tod"].cache.reference("signal_%s" % ch)))
output_map = np.zeros(npix, dtype=np.double)
h = just_make_me_a_map(output_map, signals)
hp.mollview(h, title="hitmap", nest=True)
hp.mollview(output_map, nest=True, min=0, max=1e-3, cmap="coolwarm")
hp.gnomview(output_map, rot=(0,0), xsize=5000, ysize=2000, cmap="coolwarm", nest=True, min=0, max=1e-2)
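# Added cross-check (not part of the original lesson): the hit map from the njit binner
# should be reproducible with plain numpy by summing np.bincount over every
# (pix, signal) pair collected above.
check_hits = np.zeros(npix, dtype=np.int64)
for p, _ in signals:
    check_hits += np.bincount(np.asarray(p), minlength=npix)
print("hit maps agree:", np.array_equal(check_hits, h))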
###Output
_____no_output_____
###Markdown
Simulated Sky Signal in time domainIn this lesson we will use the TOAST Operator `OpSimPySM` to create timestreams for an instrument given a sky model.
###Code
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
import toast
import healpy as hp
import numpy as np
env = toast.Environment.get()
env.set_log_level("DEBUG")
###Output
_____no_output_____
###Markdown
Scanning strategyBefore being able to scan a map into a timestream we need to define a scanning strategyand get pointing information for each channel.We use the same **satellite** scanning used in lesson 2 about scanning strategies,see the `02_Simulated_Scan_Strategies/simscan_satellite.ipynb` for more details.
###Code
focal_plane = fake_focalplane()
focal_plane.keys()
focal_plane["0A"]["fwhm_arcmin"]
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 0.5 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 64 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
from toast.todmap import TODSatellite, slew_precession_axis
detquat = {ch: focal_plane[ch]["quat"] for ch in focal_plane}
# Create distributed data
comm = toast.Comm()
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
from toast.todmap import (
get_submaps_nested,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Compute the locally hit pixels
localpix, localsm, subnpix = get_submaps_nested(data, nside)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
comm=data.comm.comm_world,
size=npix,
nnz=1,
dtype=np.int64,
submap=subnpix,
local=localsm,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
%matplotlib inline
hp.mollview(hits.data.flatten(), nest=True)
###Output
_____no_output_____
###Markdown
Define PySM parameters and instrument bandpassesThen we define the sky model parameters, choosing the desired set of `PySM` models and then we specify the band center and the bandwidth for a top-hat bandpass.Currently top-hat bandpasses are the only type supported by the operator, in the future we will implement arbitrary bandpasses.Then bandpass parameters can be added directly to the `focal_plane` dictionary:
###Code
for ch in focal_plane:
focal_plane[ch]["bandcenter_ghz"] = 70
focal_plane[ch]["bandwidth_ghz"] = 10
focal_plane[ch]["fwhm"] = 60*2
pysm_sky_config = ["s1", "f1", "a1", "d1"]
###Output
_____no_output_____
###Markdown
Run the OpSimPySM operatorThe `OpSimPySM` operator: * Creates top-hat bandpasses arrays (frequency axis and weights) as expected by `PySM` * Loops by channel and for each: * Creates a `PySMSky` object just with 1 channel at a time * Executes `PySMSky` to evaluate the sky models and bandpass-integrate * Calls `PySM` to perform distributed smoothing with `libsharp` * Gathers the map on the first MPI process * Applies coordinate transformation if necessary (not currently implemented in `libsharp`) * Use the `DistMap` object to communicate to each process the part of the sky they observe * Calls `OpSimScan` to rescan the map to a timeline
###Code
from toast.todmap import OpSimPySM
OpSimPySM?
opsim_pysm = OpSimPySM(
comm=None,
pysm_model=pysm_sky_config,
nside=nside,
apply_beam=True,
debug=True,
focalplanes=[focal_plane],
subnpix=subnpix,
localsm=localsm
)
opsim_pysm.exec(data)
###Output
_____no_output_____
###Markdown
Plot output timelines
###Code
%matplotlib inline
import matplotlib.pyplot as plt
tod = data.obs[0]['tod']
pix = tod.cache.reference("pixels_0A")
import toast.qarray as qa
theta, phi, pa = qa.to_angles(tod.read_pntg(detector="0A"))
pix
num = 10000
plt.figure(figsize=(7, 5))
plt.plot(np.degrees(theta[:num]), tod.cache.reference("signal_0A")[:num], ".")
plt.xlabel("$Colatitude [deg]$")
plt.ylabel("$Signal [ \mu K_{RJ} ]$");
###Output
_____no_output_____
###Markdown
Bin the output to a map
###Code
from numba import njit
@njit
def just_make_me_a_map(output_map, signals):
"""Temperature only binner
Bins a list of (pix, signal) tuples into an output map,
it does not support polarization, so it just averages it out.
Parameters
----------
output_map : np.array
already zeroed output map
signals : numba.typed.List of (np.array[int64] pix, np.array[np.double] signal)
Returns
-------
hits : np.array[np.int64]
hitmap
"""
hits = np.zeros(len(output_map), dtype=np.int64)
for pix, signal in signals:
for p,s in zip(pix, signal):
output_map[p] += s
hits[p] += 1
output_map[hits != 0] /= hits[hits != 0]
return hits
from numba.typed import List
signals = List()
for obs in data.obs:
for ch in focal_plane:
signals.append((obs["tod"].cache.reference("pixels_%s" % ch), obs["tod"].cache.reference("signal_%s" % ch)))
output_map = np.zeros(npix, dtype=np.double)
h = just_make_me_a_map(output_map, signals)
hp.mollview(h, title="hitmap", nest=True)
hp.mollview(output_map, nest=True, min=0, max=1e-3, cmap="coolwarm")
hp.gnomview(output_map, rot=(0,0), xsize=5000, ysize=2000, cmap="coolwarm", nest=True, min=0, max=1e-2)
###Output
_____no_output_____
###Markdown
Simulated Sky Signal in time domainIn this lesson we will use the TOAST Operator `OpSimPySM` to create timestreams for an instrument given a sky model.
###Code
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
import toast
import healpy as hp
import numpy as np
env = toast.Environment.get()
env.set_log_level("DEBUG")
###Output
_____no_output_____
###Markdown
Scanning strategyBefore being able to scan a map into a timestream we need to define a scanning strategyand get pointing information for each channel.We use the same **satellite** scanning used in lesson 2 about scanning strategies,see the `02_Simulated_Scan_Strategies/simscan_satellite.ipynb` for more details.
###Code
focal_plane = fake_focalplane()
focal_plane.keys()
focal_plane["0A"]["fwhm_arcmin"]
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 0.5 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 64 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
from toast.todmap import TODSatellite, slew_precession_axis
detquat = {ch: focal_plane[ch]["quat"] for ch in focal_plane}
# Create distributed data
comm = toast.Comm()
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
from toast.todmap import (
get_submaps_nested,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Compute the locally hit pixels
localpix, localsm, subnpix = get_submaps_nested(data, nside)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
comm=data.comm.comm_world,
size=npix,
nnz=1,
dtype=np.int64,
submap=subnpix,
local=localsm,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
%matplotlib inline
hp.mollview(hits.data.flatten(), nest=True)
###Output
_____no_output_____
###Markdown
Define PySM parameters and instrument bandpassesThen we define the sky model parameters, choosing the desired set of `PySM` models and then we specify the band center and the bandwidth for a top-hat bandpass.Currently top-hat bandpasses are the only type supported by the operator, in the future we will implement arbitrary bandpasses.Then bandpass parameters can be added directly to the `focal_plane` dictionary:
###Code
for ch in focal_plane:
focal_plane[ch]["bandcenter_ghz"] = 70
focal_plane[ch]["bandwidth_ghz"] = 10
focal_plane[ch]["fwhm"] = 60*2
pysm_sky_config = ["s1", "f1", "a1", "d1"] #syncrotron free free a&e and dust components of the sky
###Output
_____no_output_____
###Markdown
Run the OpSimPySM operatorThe `OpSimPySM` operator: * Creates top-hat bandpasses arrays (frequency axis and weights) as expected by `PySM` * Loops by channel and for each: * Creates a `PySMSky` object just with 1 channel at a time * Executes `PySMSky` to evaluate the sky models and bandpass-integrate * Calls `PySM` to perform distributed smoothing with `libsharp` * Gathers the map on the first MPI process * Applies coordinate transformation if necessary (not currently implemented in `libsharp`) * Use the `DistMap` object to communicate to each process the part of the sky they observe * Calls `OpSimScan` to rescan the map to a timeline
###Code
from toast.todmap import OpSimPySM
OpSimPySM?
opsim_pysm = OpSimPySM(
comm=None,
pysm_model=pysm_sky_config,
nside=nside,
apply_beam=True,
debug=True,
focalplanes=[focal_plane],
subnpix=subnpix,
localsm=localsm
)
opsim_pysm.exec(data)
###Output
_____no_output_____
###Markdown
Plot output timelines
###Code
%matplotlib inline
import matplotlib.pyplot as plt
tod = data.obs[0]['tod']
pix = tod.cache.reference("pixels_0A")
import toast.qarray as qa
theta, phi, pa = qa.to_angles(tod.read_pntg(detector="0A"))
#read_pntg returns pointing quaternions; to_angles converts them to theta (the colatitude: 0 = north pole, 180 = south pole), phi and the position angle pa
pix
num = 10000
plt.figure(figsize=(7, 5))
plt.plot(np.degrees(theta[:num]), tod.cache.reference("signal_0A")[:num], ".")
plt.xlabel("$Colatitude [deg]$")
plt.ylabel("$Signal [ \mu K_{RJ} ]$");
num = 1000
plt.figure(figsize=(7, 5))
plt.plot(tod.cache.reference("signal_0A")[:num], "-")
plt.xlabel("$Time [arb.]$")
plt.ylabel("$Signal [ \mu K_{RJ} ]$");
#the signal rises where the detector scans across the galactic plane; another view of the same data
###Output
_____no_output_____
###Markdown
Bin the output to a map
###Code
from numba import njit # just-in-time compiler for Python; can sometimes be used to avoid writing C++
@njit #causes numba to compile this function so that it runs faster
def just_make_me_a_map(output_map, signals):
"""Temperature only binner
Bins a list of (pix, signal) tuples into an output map,
it does not support polarization, so it just averages it out.
Parameters
----------
output_map : np.array
already zeroed output map
signals : numba.typed.List of (np.array[int64] pix, np.array[np.double] signal)
Returns
-------
hits : np.array[np.int64]
hitmap
"""
hits = np.zeros(len(output_map), dtype=np.int64)
for pix, signal in signals:
for p,s in zip(pix, signal):
output_map[p] += s
hits[p] += 1
output_map[hits != 0] /= hits[hits != 0]
return hits
from numba.typed import List
signals = List()
for obs in data.obs:
for ch in focal_plane:
signals.append((obs["tod"].cache.reference("pixels_%s" % ch), obs["tod"].cache.reference("signal_%s" % ch)))
output_map = np.zeros(npix, dtype=np.double)
h = just_make_me_a_map(output_map, signals)
hp.mollview(h, title="hitmap", nest=True)
hp.mollview(output_map, nest=True, min=0, max=1e-3, cmap="coolwarm") #making a map from our focal plane with 2 deg beams
hp.gnomview(output_map, rot=(0,0), xsize=5000, ysize=2000, cmap="coolwarm", nest=True, min=0, max=1e-2)
###Output
_____no_output_____ |
notebooks/Marriage-LR.ipynb | ###Markdown
Marriageability - Logistic Regression
###Code
#Importing Python packages
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import pickle
import yellowbrick as yb
from sklearn import metrics
import os
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore")
#Importing CLassifier Packages for Scikitlearn
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import (RandomForestClassifier, BaggingClassifier, RandomTreesEmbedding,GradientBoostingClassifier)
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport
###Output
_____no_output_____
###Markdown
Loading the data
###Code
#Load the data
ACSproject = pd.read_csv('data/ACSproject.csv', sep=',', header=0, skipinitialspace=True)
ACSproject.head()
###Output
_____no_output_____
###Markdown
Creating Location Features The GEOID yields (n = 2,378) distinct locations in the dataframe, which is too many to run the algorithms on with the available computing power. Instead, the project utilizes ST_T (state) in place of location and creates Tri_State (0/1) to denote states with economic and geographic ties.
###Code
#Create Tri-state indicator Ex: MD+DC+VA
ACSproject['Tri_State'] = 0
ACSproject.loc[ACSproject.ST_T.isin([9, 10, 11, 17, 18, 21, 24, 34, 36, 39, 42, 51, 54]), 'Tri_State'] = 1
ACSproject['Tri_State'].value_counts(normalize=False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
ACSproject.dtypes
#Casting all columns as integers
ACSproject.astype('int64').dtypes
#One-hot encoding states
ACSproject = pd.get_dummies(ACSproject, columns=['ST'], prefix = 'ST_', drop_first=False)
ACSproject.head()
###Output
_____no_output_____
###Markdown
Modeling
###Code
# Labeling our X and y data
X = ACSproject[['CITIZEN','MOVER','EDUCATION','WORK_SOC','SEX_T','DIS_T','HISPANIC','WHITE','BLACK','INDIAN','ASIAN','OTHER', 'AGE_BIN', 'INCOME_BIN',
'OCC_BUS', 'OCC_CMM','OCC_CMS','OCC_CON', 'OCC_EAT', 'OCC_EDU','OCC_ENG', 'OCC_ENT', 'OCC_EXT', 'OCC_FFF',
'OCC_FIN', 'OCC_HLS', 'OCC_LGL', 'OCC_MED', 'OCC_MGR', 'OCC_MIL', 'OCC_OFF', 'OCC_PRD', 'OCC_PRS', 'OCC_PRT',
'OCC_RPR', 'OCC_SAL', 'OCC_SCI', 'OCC_TRN', 'OCC_UNE','FAMILY','ENGLISH','Tri_State',
'ST__1','ST__2','ST__4','ST__5','ST__6','ST__8','ST__9','ST__10','ST__11','ST__12','ST__13',
'ST__15','ST__16','ST__17','ST__18','ST__19','ST__20','ST__21','ST__22','ST__23','ST__24',
'ST__25','ST__26','ST__27','ST__28','ST__29','ST__30','ST__31','ST__32','ST__33','ST__34',
'ST__35','ST__36','ST__37','ST__38','ST__39','ST__40','ST__41','ST__42','ST__44','ST__45',
'ST__46','ST__47','ST__48','ST__49','ST__50','ST__51','ST__53','ST__54','ST__55','ST__56',
]].values
y = ACSproject['MARRIED'].values
#Specify the class of the target
classes = ['Not Married', 'Married']
#Splitting train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
#LogisticRegression
LR = LogisticRegression()
#Train the algorithm
LR.fit(X_train, y_train)
# predict the response
pred = LR.predict(X_test)
# evaluate accuracy
print ("Logistic Regression f1 score : ",f1_score(y_test, pred))
# Visualize LR
visualizer = ClassificationReport(LR, classes=classes, support=True, size=(500, 300))
visualizer.fit(X_train, y_train) # Fit the visualizer and the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
###Markdown
Cross Validation
###Code
# #Logistic Regression CV
# from sklearn.model_selection import StratifiedKFold, cross_val_score
# from sklearn.model_selection import train_test_split
# kfold = StratifiedKFold(n_splits=6,shuffle=True,random_state=0)
# scores = cross_val_score(LR, X, y, cv=kfold)
# print('Cross-Validation Scores: {}'.format(scores))
# print('Average Shuffled Cross-Validation Score: {}'.format(scores.mean()))
###Output
_____no_output_____
###Markdown
Grid Search
###Code
# # This model doesn't work any more!
# #Logistic Regression
# from sklearn.model_selection import GridSearchCV
# from sklearn.linear_model import LogisticRegression
# grid={"C":np.logspace(-3,3,7), "penalty":["l1","l2"]}# l1 lasso l2 ridge
# logreg=LogisticRegression()
# logreg_cv=GridSearchCV(logreg,grid,cv=3)
# logreg_cv.fit(X_train,y_train)
# print("tuned hpyerparameters :(best parameters) ",logreg_cv.best_params_)
# print("accuracy :",logreg_cv.best_score_)
###Output
_____no_output_____
###Markdown
Tuning Model
###Code
#LogisticRegression
LR = LogisticRegression(penalty = 'l2', C=0.0001)
#Train the algorithm
LR.fit(X_train, y_train)
#Saving the model
filename = 'finalized_LR_model.sav'
pickle.dump(LR, open(filename, 'wb'))
# predict the response
pred = LR.predict(X_test)
# evaluate accuracy
print ("Logistic Regression f1 score : ",f1_score(y_test, pred))
# Visualize LR
visualizer = ClassificationReport(LR, classes=classes, support=True, size=(500, 300))
visualizer.fit(X_train, y_train) # Fit the visualizer and the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
# visualizer.poof() # Draw/show/poof the data
def visualize_results(cm,score):
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'gray_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15);
def fit_and_evaluate(X, y, model): #, args):
#model = model(**args)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
#avg_score = cross_val_score(model, X_test, y_test, cv=5).mean()
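    # Note (added comment): avg_score is hard-coded, apparently from an earlier
    # cross_val_score run (see the commented line above); it is only used for the plot title.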
avg_score = 0.6772239180472
cm = metrics.confusion_matrix(y_test, predictions)
visualize_results(cm, avg_score)
#To do: Fill in the best performing parameters below and call the fit_and_evaluate function to fit and score our model
#best_parameters= {'C':'0.0001', penalty:'l2'}
fit_and_evaluate(X_train, y_train, LR)
# load the model from disk
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(X_test, y_test)
print(result)
#End of code.
###Output
_____no_output_____ |
docs/Math_Introduction.ipynb | ###Markdown
Introduction to `Φ.math`[](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Math_Introduction.ipynb)
###Code
# !pip install --quiet phiflow
from phi import math
from phi.math import spatial, channel, batch, instance, tensor, wrap
###Output
_____no_output_____
###Markdown
Shapes and Dimension Types
###Code
spatial(x=4, y=3)
x = math.zeros(spatial(x=4, y=3), channel(vector=2))
###Output
_____no_output_____ |
0003 Algorithm Selection/03. 1 Hyperparameter Optimization.ipynb | ###Markdown
Aalto
###Code
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
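# Note (added comment): samples with test_fold == -1 are never placed in a validation fold,
# so this PredefinedSplit yields a single split that always trains on the original X_train
# rows and always scores on the original X_test rows.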
def run_random_search(model, params, x_train, y_train):
#grid = GridSearchCV(model, params, cv = ps, n_jobs = -1, scoring = score, verbose = 0, refit = False)
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')  # use the params argument rather than the global param_grid
grid.fit(x_train, y_train)
return (grid.best_params_, round(grid.best_score_,8),grid.best_estimator_)
###Output
_____no_output_____
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.7216903208093847 10.534 24
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 29, 'min_samples_split': 2} 0.7270150996774428 14.381 0
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 29, 'min_samples_split': 9} 0.724980440036087 15.236 1
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 13, 'min_samples_split': 2} 0.7252618968864211 9.218 2
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 16, 'min_samples_split': 9} 0.7245593084832818 10.306 3
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 27, 'min_samples_split': 6} 0.7251107950372347 13.949 4
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 20, 'min_samples_split': 9} 0.7244973995041137 13.091 5
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 16, 'min_samples_split': 2} 0.7231765310218701 12.385 6
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 21, 'min_samples_split': 9} 0.7247943363351329 12.858 7
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 18, 'min_samples_split': 9} 0.7248002515917541 12.961 8
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 8, 'min_samples_split': 5} 0.723367681399762 9.64 9
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 27, 'min_samples_split': 8} 0.7244262045209183 14.53 10
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 19, 'min_samples_split': 2} 0.7236455988378891 13.168 11
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 20, 'min_samples_split': 8} 0.723866087060152 14.352 12
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 8, 'min_samples_split': 8} 0.7237479956150793 11.079 13
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 7, 'min_samples_split': 9} 0.7241165685092735 9.799 14
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 20, 'min_samples_split': 2} 0.7257924263102572 12.802 15
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 15, 'min_samples_split': 8} 0.7232163736578001 12.453 16
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 18, 'min_samples_split': 5} 0.7231862602417674 12.192 17
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 12, 'min_samples_split': 8} 0.7242180589439537 12.623 18
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 8, 'min_samples_split': 9} 0.7237966278378923 10.075 19
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 13, 'min_samples_split': 6} 0.7253347945229431 10.037 20
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 26, 'min_samples_split': 6} 0.725302991802814 12.981 21
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 17, 'min_samples_split': 3} 0.7248940115045128 10.636 22
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 20, 'min_samples_split': 4} 0.7209539428720394 11.647 23
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 9, 'min_samples_split': 8} 0.722450295095256 8.9 24
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 18, 'min_samples_split': 3} 0.7250459496902333 10.466 25
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 6, 'min_samples_split': 5} 0.724028654176219 7.67 26
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 14, 'min_samples_split': 6} 0.725094307029027 9.462 27
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 26, 'min_samples_split': 9} 0.7251236464849294 13.486 28
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 12, 'min_samples_split': 6} 0.7246663845617409 9.562 29
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 7, 'min_samples_split': 4} 0.7228604700446687 8.017 30
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 18, 'min_samples_split': 4} 0.7234831398596584 10.994 31
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 22, 'min_samples_split': 8} 0.7242566965576555 12.191 32
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 25, 'min_samples_split': 8} 0.724689598515635 14.339 33
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 10, 'min_samples_split': 6} 0.7231456172138455 9.195 34
{'criterion': 'gini', 'max_depth': 19.0, 'max_features': 12, 'min_samples_split': 6} 0.7186509273016304 9.17 35
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 27, 'min_samples_split': 3} 0.7246350578939911 13.181 36
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 22, 'min_samples_split': 5} 0.7260497250645431 11.054 37
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 24, 'min_samples_split': 8} 0.7190099059589314 11.804 38
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 29, 'min_samples_split': 5} 0.7251969331434204 13.558 39
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 11, 'min_samples_split': 5} 0.7247805981039107 8.714 40
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 18, 'min_samples_split': 8} 0.7250182827569855 9.906 41
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 23, 'min_samples_split': 5} 0.7256816573850342 12.52 42
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 18, 'min_samples_split': 7} 0.7244911364951246 9.992 43
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 29, 'min_samples_split': 5} 0.7271805851132336 14.186 44
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 29, 'min_samples_split': 8} 0.725989247364035 13.222 45
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 16, 'min_samples_split': 8} 0.7235095351931019 10.251 46
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 26, 'min_samples_split': 2} 0.7267338625528005 13.176 47
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 26, 'min_samples_split': 2} 0.7266859440773427 13.061 48
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 27, 'min_samples_split': 3} 0.7268408251957146 12.488 49
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 8, 'min_samples_split': 5} 0.7221796067233227 7.971 50
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 20, 'min_samples_split': 6} 0.7250337431216138 10.596 51
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 23, 'min_samples_split': 2} 0.7211636749895688 11.646 52
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 3, 'min_samples_split': 2} 0.7234949727863585 6.745 53
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 22, 'min_samples_split': 6} 0.7271052189634853 11.83 54
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 14, 'min_samples_split': 7} 0.7237087117579789 8.427 55
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 12, 'min_samples_split': 8} 0.724436783952094 9.811 56
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 29, 'min_samples_split': 9} 0.7252723672583222 14.184 57
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 24, 'min_samples_split': 4} 0.725737715800568 11.45 58
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 1, 'min_samples_split': 9} 0.7234410867960476 6.701 59
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 10, 'min_samples_split': 2} 0.7251610309852468 9.171 60
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 17, 'min_samples_split': 9} 0.7245209535453699 10.934 61
###Markdown
GridSearchCV DT
###Code
param_grid = { 'criterion':['gini','entropy'],
"max_depth":list(range(1,32)),
"min_samples_split":list(range(2,10)),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" :list(range(1,X_train.shape[1]))}
nbModel_grid = GridSearchCV(estimator=DecisionTreeClassifier(), param_grid=param_grid, verbose=1, cv=ps, n_jobs=-1)
nbModel_grid.fit(X, y)
print(nbModel_grid.best_estimator_)
###Output
Fitting 1 folds for each of 14384 candidates, totalling 14384 fits
###Markdown
RandomizedSearchCV RF
###Code
# use a full grid over all parameters
param_grid = {"max_depth":np.linspace(1, 32, 32, endpoint=True),
"n_estimators" : sp_randint(1, 200),
"max_features": sp_randint(1, 11),
"min_samples_split":sp_randint(2, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
second=time()
f1=[]
clf=RandomForestClassifier()
for ii in range(1):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(50):
second=time()
a,b,clf=run_random_search(RandomForestClassifier(),param_grid,X,y)
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1=sklearn.metrics.f1_score(y_test, predict,average= "macro")
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
default 0.7227299454671903 10.844 0
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 2, 'min_samples_split': 10, 'n_estimators': 156} 0.7289502435293613 159.548 0
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 25.0, 'max_features': 1, 'min_samples_split': 8, 'n_estimators': 113} 0.7262619535233411 193.489 1
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 10, 'min_samples_split': 10, 'n_estimators': 148} 0.7236580905885828 183.427 2
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 4, 'min_samples_split': 10, 'n_estimators': 82} 0.727171992889616 113.511 3
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 3, 'min_samples_split': 7, 'n_estimators': 90} 0.7246079997419235 147.829 4
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 17.0, 'max_features': 3, 'min_samples_split': 7, 'n_estimators': 179} 0.7261607773967084 141.951 5
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 8, 'min_samples_split': 9, 'n_estimators': 71} 0.7274836873217096 159.158 6
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 2, 'min_samples_split': 10, 'n_estimators': 44} 0.7272760895770615 125.654 7
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 31.0, 'max_features': 5, 'min_samples_split': 7, 'n_estimators': 57} 0.7223700356865719 110.93 8
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 6, 'min_samples_split': 7, 'n_estimators': 170} 0.726462732206969 197.449 9
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 15.0, 'max_features': 7, 'min_samples_split': 8, 'n_estimators': 65} 0.7271774612920666 103.825 10
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 15.0, 'max_features': 3, 'min_samples_split': 9, 'n_estimators': 192} 0.7259226691785232 118.783 11
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 5, 'min_samples_split': 10, 'n_estimators': 101} 0.7264429266577636 139.107 12
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 29.0, 'max_features': 1, 'min_samples_split': 9, 'n_estimators': 168} 0.7273870266442394 159.972 13
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 20.0, 'max_features': 4, 'min_samples_split': 2, 'n_estimators': 42} 0.721795284493693 192.037 14
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 7, 'min_samples_split': 2, 'n_estimators': 179} 0.725948017769752 226.695 15
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 9, 'min_samples_split': 10, 'n_estimators': 88} 0.7268684877036256 177.318 16
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 16.0, 'max_features': 5, 'min_samples_split': 3, 'n_estimators': 40} 0.7275582826119787 158.053 17
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 29.0, 'max_features': 10, 'min_samples_split': 4, 'n_estimators': 131} 0.7261681352406946 150.095 18
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 16.0, 'max_features': 10, 'min_samples_split': 8, 'n_estimators': 51} 0.7254586605586391 106.204 19
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 6, 'min_samples_split': 7, 'n_estimators': 97} 0.7260312420099452 84.556 20
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 7, 'min_samples_split': 10, 'n_estimators': 72} 0.7281730893615443 94.277 21
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 6, 'min_samples_split': 10, 'n_estimators': 87} 0.7271905907490757 128.626 22
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 28.0, 'max_features': 7, 'min_samples_split': 9, 'n_estimators': 48} 0.7270876642348131 105.576 23
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 3, 'min_samples_split': 3, 'n_estimators': 75} 0.7256443228361186 66.597 24
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 10, 'min_samples_split': 7, 'n_estimators': 170} 0.7288270925057012 134.969 25
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 29.0, 'max_features': 3, 'min_samples_split': 5, 'n_estimators': 150} 0.7244428726674976 116.494 26
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 18.0, 'max_features': 8, 'min_samples_split': 8, 'n_estimators': 173} 0.7262569086702089 163.22 27
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 10, 'min_samples_split': 7, 'n_estimators': 58} 0.7270239359734808 99.236 28
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 5, 'min_samples_split': 6, 'n_estimators': 40} 0.7236789322288364 89.597 29
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 27.0, 'max_features': 5, 'min_samples_split': 5, 'n_estimators': 78} 0.7226860148579904 67.1 30
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 22.0, 'max_features': 2, 'min_samples_split': 8, 'n_estimators': 158} 0.7251134866724035 107.267 31
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 9, 'min_samples_split': 8, 'n_estimators': 137} 0.7282017198879392 154.268 32
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 8, 'min_samples_split': 9, 'n_estimators': 72} 0.7239421297620993 100.141 33
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 7, 'min_samples_split': 8, 'n_estimators': 114} 0.7272023152708532 107.083 34
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 21.0, 'max_features': 2, 'min_samples_split': 3, 'n_estimators': 153} 0.7243882279289854 89.127 35
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 9, 'min_samples_split': 9, 'n_estimators': 69} 0.7268893812726851 96.224 36
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 25.0, 'max_features': 3, 'min_samples_split': 7, 'n_estimators': 60} 0.7270006719672821 79.197 37
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 8, 'min_samples_split': 10, 'n_estimators': 150} 0.7269748732859105 140.889 38
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 18.0, 'max_features': 8, 'min_samples_split': 9, 'n_estimators': 96} 0.729092339928399 82.865 39
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 10, 'min_samples_split': 4, 'n_estimators': 134} 0.7279725366435981 99.794 40
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 19.0, 'max_features': 7, 'min_samples_split': 4, 'n_estimators': 27} 0.726429423617312 89.002 41
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 25.0, 'max_features': 1, 'min_samples_split': 10, 'n_estimators': 24} 0.7278031608325315 121.184 42
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 17.0, 'max_features': 7, 'min_samples_split': 9, 'n_estimators': 60} 0.7268951830128271 126.641 43
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 4, 'min_samples_split': 9, 'n_estimators': 154} 0.72544300622277 137.125 44
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 2, 'min_samples_split': 8, 'n_estimators': 123} 0.7267975800525236 103.93 45
{'bootstrap': True, 'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 8, 'min_samples_split': 10, 'n_estimators': 52} 0.725383068130239 98.231 46
{'bootstrap': True, 'criterion': 'gini', 'max_depth': 21.0, 'max_features': 8, 'min_samples_split': 10, 'n_estimators': 139} 0.727666811668462 115.969 47
{'bootstrap': False, 'criterion': 'gini', 'max_depth': 20.0, 'max_features': 7, 'min_samples_split': 9, 'n_estimators': 93} 0.7276667091992742 113.684 48
###Markdown
RandomizedSearchCV KNeighborsClassifier
###Code
# use a full grid over all parameters
param_grid = {"n_neighbors" : sp_randint(1,64) ,
"leaf_size": sp_randint(1,50) ,
"algorithm" : ["auto", "ball_tree", "kd_tree", "brute"],
"weights" : ["uniform", "distance"]}
second=time()
f1=[]
clf=KNeighborsClassifier()
for ii in range(1):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))  # use ii from this loop, not the stale i from a previous cell
for i in range(50):
second=time()
a,b,clf=run_random_search(KNeighborsClassifier(),param_grid,X,y)
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1=sklearn.metrics.f1_score(y_test, predict,average= "macro")
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
default 0.7097573078498426 31.042 49
{'algorithm': 'auto', 'leaf_size': 30, 'n_neighbors': 63, 'weights': 'distance'} 0.7169344588030341 211.957 0
{'algorithm': 'auto', 'leaf_size': 16, 'n_neighbors': 47, 'weights': 'distance'} 0.7176466921242463 245.564 1
{'algorithm': 'brute', 'leaf_size': 18, 'n_neighbors': 44, 'weights': 'distance'} 0.7169670826387976 273.454 2
{'algorithm': 'brute', 'leaf_size': 2, 'n_neighbors': 48, 'weights': 'distance'} 0.718017143478749 227.629 3
{'algorithm': 'auto', 'leaf_size': 43, 'n_neighbors': 46, 'weights': 'distance'} 0.7176244604190383 228.35 4
{'algorithm': 'auto', 'leaf_size': 18, 'n_neighbors': 40, 'weights': 'distance'} 0.7175984782540666 257.862 5
{'algorithm': 'brute', 'leaf_size': 20, 'n_neighbors': 61, 'weights': 'distance'} 0.7170127067216242 234.887 6
{'algorithm': 'kd_tree', 'leaf_size': 1, 'n_neighbors': 46, 'weights': 'distance'} 0.7104694436974722 202.743 7
{'algorithm': 'brute', 'leaf_size': 35, 'n_neighbors': 7, 'weights': 'uniform'} 0.7081961280125751 209.964 8
{'algorithm': 'brute', 'leaf_size': 23, 'n_neighbors': 52, 'weights': 'distance'} 0.7173685344914941 212.04 9
{'algorithm': 'auto', 'leaf_size': 37, 'n_neighbors': 54, 'weights': 'distance'} 0.7177129935879529 224.004 10
{'algorithm': 'brute', 'leaf_size': 4, 'n_neighbors': 62, 'weights': 'distance'} 0.7170097076382455 207.909 11
{'algorithm': 'brute', 'leaf_size': 8, 'n_neighbors': 63, 'weights': 'distance'} 0.7169344588030341 271.655 12
{'algorithm': 'brute', 'leaf_size': 17, 'n_neighbors': 43, 'weights': 'distance'} 0.7178903666497938 228.07 13
{'algorithm': 'brute', 'leaf_size': 14, 'n_neighbors': 42, 'weights': 'distance'} 0.7176872387392771 208.902 14
{'algorithm': 'kd_tree', 'leaf_size': 4, 'n_neighbors': 44, 'weights': 'distance'} 0.7102768121197465 252.865 15
{'algorithm': 'brute', 'leaf_size': 45, 'n_neighbors': 23, 'weights': 'distance'} 0.7114519450706117 230.086 16
{'algorithm': 'auto', 'leaf_size': 7, 'n_neighbors': 16, 'weights': 'distance'} 0.7131093630276617 202.224 17
{'algorithm': 'auto', 'leaf_size': 13, 'n_neighbors': 48, 'weights': 'distance'} 0.718017143478749 288.878 18
{'algorithm': 'brute', 'leaf_size': 31, 'n_neighbors': 56, 'weights': 'distance'} 0.7176132491846418 167.248 19
{'algorithm': 'kd_tree', 'leaf_size': 35, 'n_neighbors': 56, 'weights': 'distance'} 0.7079073271193573 190.033 20
{'algorithm': 'brute', 'leaf_size': 7, 'n_neighbors': 44, 'weights': 'distance'} 0.7169670826387976 251.166 21
{'algorithm': 'auto', 'leaf_size': 15, 'n_neighbors': 54, 'weights': 'distance'} 0.7177129935879529 232.68 22
{'algorithm': 'brute', 'leaf_size': 13, 'n_neighbors': 51, 'weights': 'distance'} 0.7173184534887421 230.976 23
{'algorithm': 'auto', 'leaf_size': 3, 'n_neighbors': 38, 'weights': 'distance'} 0.7167155412508265 246.683 24
{'algorithm': 'brute', 'leaf_size': 11, 'n_neighbors': 32, 'weights': 'distance'} 0.7102365799223443 229.454 25
{'algorithm': 'brute', 'leaf_size': 24, 'n_neighbors': 50, 'weights': 'distance'} 0.717631726693687 206.894 26
{'algorithm': 'ball_tree', 'leaf_size': 49, 'n_neighbors': 33, 'weights': 'distance'} 0.7103457394465497 166.123 27
{'algorithm': 'brute', 'leaf_size': 41, 'n_neighbors': 48, 'weights': 'distance'} 0.718017143478749 255.024 28
{'algorithm': 'kd_tree', 'leaf_size': 16, 'n_neighbors': 34, 'weights': 'distance'} 0.7111808408837775 191.456 29
{'algorithm': 'auto', 'leaf_size': 26, 'n_neighbors': 54, 'weights': 'distance'} 0.7177129935879529 291.692 30
{'algorithm': 'auto', 'leaf_size': 16, 'n_neighbors': 62, 'weights': 'distance'} 0.7170097076382455 211.092 31
{'algorithm': 'auto', 'leaf_size': 11, 'n_neighbors': 54, 'weights': 'distance'} 0.7177129935879529 198.097 32
{'algorithm': 'brute', 'leaf_size': 28, 'n_neighbors': 45, 'weights': 'distance'} 0.7174484819687488 206.553 33
{'algorithm': 'auto', 'leaf_size': 14, 'n_neighbors': 22, 'weights': 'distance'} 0.7113581356843974 170.384 34
{'algorithm': 'ball_tree', 'leaf_size': 16, 'n_neighbors': 42, 'weights': 'distance'} 0.7102057347512681 171.394 35
{'algorithm': 'brute', 'leaf_size': 25, 'n_neighbors': 44, 'weights': 'distance'} 0.7169670826387976 253.003 36
{'algorithm': 'brute', 'leaf_size': 38, 'n_neighbors': 45, 'weights': 'distance'} 0.7174484819687488 220.847 37
{'algorithm': 'brute', 'leaf_size': 24, 'n_neighbors': 38, 'weights': 'distance'} 0.7167155412508265 239.582 38
{'algorithm': 'brute', 'leaf_size': 32, 'n_neighbors': 61, 'weights': 'distance'} 0.7170127067216242 246.425 39
{'algorithm': 'ball_tree', 'leaf_size': 15, 'n_neighbors': 55, 'weights': 'distance'} 0.7066496694373934 200.188 40
{'algorithm': 'brute', 'leaf_size': 40, 'n_neighbors': 29, 'weights': 'distance'} 0.7102025057910922 254.251 41
{'algorithm': 'auto', 'leaf_size': 13, 'n_neighbors': 63, 'weights': 'distance'} 0.7169344588030341 253.98 42
{'algorithm': 'brute', 'leaf_size': 1, 'n_neighbors': 44, 'weights': 'distance'} 0.7169670826387976 194.761 43
{'algorithm': 'brute', 'leaf_size': 24, 'n_neighbors': 45, 'weights': 'distance'} 0.7174484819687488 249.6 44
{'algorithm': 'auto', 'leaf_size': 23, 'n_neighbors': 19, 'weights': 'distance'} 0.7141141781681122 221.987 45
{'algorithm': 'brute', 'leaf_size': 28, 'n_neighbors': 15, 'weights': 'distance'} 0.7131475504599976 231.722 46
{'algorithm': 'brute', 'leaf_size': 14, 'n_neighbors': 52, 'weights': 'distance'} 0.7173685344914941 205.543 47
{'algorithm': 'auto', 'leaf_size': 36, 'n_neighbors': 56, 'weights': 'distance'} 0.7176132491846418 218.377 48
{'algorithm': 'brute', 'leaf_size': 4, 'n_neighbors': 60, 'weights': 'distance'} 0.7169916986321065 228.11 49
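###Markdown
In the 50 random-search runs above, `weights='distance'` with `n_neighbors` roughly in the 43–54 range repeatedly gives the best macro F1 (about 0.718), e.g. `n_neighbors=48` in iterations 3, 18 and 28. A minimal sketch (assuming the `X_train`, `y_train`, `X_test`, `y_test` splits from the cells above) that refits one of these configurations as a final model:
###Code
from sklearn.neighbors import KNeighborsClassifier
import sklearn.metrics
# One of the top configurations from the table above; leaf_size/algorithm only affect
# neighbour-search speed, not predictions, so the defaults are kept here.
best_knn = KNeighborsClassifier(n_neighbors=48, weights='distance')
best_knn.fit(X_train, y_train)
knn_pred = best_knn.predict(X_test)
print(sklearn.metrics.f1_score(y_test, knn_pred, average='macro'))
###Output
_____no_output_____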
###Markdown
RandomizedSearchCV GradientBoostingClassifier
###Code
# use a full grid over all parameters
param_grid = {"learning_rate": sp_randFloat(),
"subsample" : sp_randFloat(),
"n_estimators" : sp_randInt(100, 1000),
"max_depth" : sp_randInt(4, 10)
}
second=time()
f1=[]
clf=GradientBoostingClassifier()
for ii in range(1):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(1):
second=time()
a,b,clf=run_random_search(GradientBoostingClassifier(),param_grid,X,y)
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1=sklearn.metrics.f1_score(y_test, predict,average= "macro")
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
default 0.6247629147200783 323.341 0
{'learning_rate': 0.1838641631843394, 'max_depth': 6, 'n_estimators': 535, 'subsample': 0.7134682210818548} 0.010075373269035157 41527.706 0
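###Markdown
The randomized search above is very expensive for GradientBoostingClassifier: the single call took roughly 41,500 s for the default 10 sampled candidates, and the resulting model scores far below the default one. A minimal sketch of bounding the search cost explicitly with RandomizedSearchCV's `n_iter` and `n_jobs` arguments, assuming the same `param_grid`, predefined split `ps` and data as above:
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV
# Fewer sampled candidates and parallel fits keep the wall-clock time bounded;
# still slow for this estimator, but predictably so.
search = RandomizedSearchCV(GradientBoostingClassifier(), param_grid,
                            n_iter=5, cv=ps, scoring='f1_macro',
                            n_jobs=-1, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
###Output
_____no_output_____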
###Markdown
SVM
###Code
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma' : [0.001, 0.01, 0.1, 1]}
svmModel_grid = GridSearchCV(estimator=svm.SVC(), param_grid=param_grid, verbose=1, cv=ps, n_jobs=-1)
svmModel_grid.fit(X, y)
print(svmModel_grid.best_estimator_)
###Output
Fitting 1 folds for each of 20 candidates, totalling 20 fits
SVC(C=10, gamma=1)
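###Markdown
The grid search reports `SVC(C=10, gamma=1)` as the best configuration on the predefined split. A minimal sketch that refits it on the training portion only and scores it on the held-out test set (GridSearchCV's default `refit=True` retrains on all of `X`, which here also contains the test rows, so a separate refit on `X_train` keeps the evaluation comparable to the other models):
###Code
from sklearn import svm
import sklearn.metrics
# Best configuration reported above, refit on the training data only.
best_svc = svm.SVC(C=10, gamma=1)
best_svc.fit(X_train, y_train)
svc_pred = best_svc.predict(X_test)
print(sklearn.metrics.f1_score(y_test, svc_pred, average='macro'))
###Output
_____no_output_____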
###Markdown
RandomizedSearchCV SVM
###Code
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma' : [0.001, 0.01, 0.1, 1]}
second=time()
a,b,clf=run_random_search(svm.SVC(),param_grid,X,y)
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1=(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),b))
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma' : [0.001, 0.01, 0.1, 1]}
for i in range(33):
second=time()
a,b,clf=run_random_search(svm.SVC(),param_grid,X,y)
f1=[]
for ii in range(10):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
from scipy import stats  # used for stats.uniform below (may already be imported earlier in the notebook)
param_grid = {"C": stats.uniform(0.001, 10),
              "gamma": stats.uniform(0.001, 1)}
for i in range(33):
second=time()
a,b,clf=run_random_search(svm.SVC(),param_grid,X,y)
f1=[]
for ii in range(10):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
_____no_output_____
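###Markdown
`C` and `gamma` are usually explored over several orders of magnitude, and `stats.uniform(0.001, 10)` spends almost all of its draws near the top of that range. A minimal alternative sketch using log-uniform sampling (`scipy.stats.loguniform`, available in SciPy >= 1.4) with the same `run_random_search` helper and data:
###Code
from scipy.stats import loguniform
# Values between 1e-3 and 1e1 (C) / 1e0 (gamma) are sampled uniformly on a log scale,
# so small and large magnitudes are tried equally often.
param_grid = {"C": loguniform(1e-3, 1e1),
              "gamma": loguniform(1e-3, 1e0)}
a, b, clf = run_random_search(svm.SVC(), param_grid, X, y)
print(a, b)
###Output
_____no_output_____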
###Markdown
NB
###Code
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import GridSearchCV
param_grid_nb = {
'alpha': np.logspace(0,-9, num=100),
"fit_prior":["True","False"]
}
nbModel_grid = GridSearchCV(estimator=CategoricalNB(), param_grid=param_grid_nb, verbose=1, cv=ps, n_jobs=-1)
nbModel_grid.fit(X, y)
print(nbModel_grid.best_estimator_)
###Output
Fitting 1 folds for each of 200 candidates, totalling 200 fits
###Markdown
RandomizedSearchCV NB
###Code
second=time()
param_grid = {
'alpha': np.logspace(0,-9, num=100),
"fit_prior":["True","False"]
}
a,b,clf=run_random_search(CategoricalNB(),param_grid,X,y)
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1=(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),b))
from sklearn.naive_bayes import CategoricalNB
second=time()
param_grid = {
'alpha': np.logspace(0,-9, num=100),
"fit_prior":["True","False"]
}
for i in range(100):
second=time()
a,b,clf=run_random_search(CategoricalNB(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
{'fit_prior': 'False', 'alpha': 8.111308307896873e-08} 0.5585527392583353 27.113 0
{'fit_prior': 'False', 'alpha': 5.336699231206313e-07} 0.5585321247002658 26.625 1
{'fit_prior': 'False', 'alpha': 1.873817422860383e-08} 0.5585237755311087 27.33 2
{'fit_prior': 'False', 'alpha': 2.310129700083158e-07} 0.5585787416692072 26.851 3
{'fit_prior': 'False', 'alpha': 1.232846739442066e-07} 0.5585527392583353 28.996 4
{'fit_prior': 'False', 'alpha': 5.336699231206313e-07} 0.5585321247002658 30.334 5
{'fit_prior': 'False', 'alpha': 1.873817422860383e-07} 0.5585787416692072 25.975 6
{'fit_prior': 'True', 'alpha': 2.848035868435799e-07} 0.5585787416692072 30.5 7
{'fit_prior': 'True', 'alpha': 8.111308307896856e-09} 0.5585237755311087 28.78 8
{'fit_prior': 'False', 'alpha': 8.111308307896872e-07} 0.5585042848171752 29.407 9
{'fit_prior': 'False', 'alpha': 1.232846739442066e-08} 0.5585237755311087 27.218 10
{'fit_prior': 'False', 'alpha': 1.873817422860383e-07} 0.5585787416692072 28.401 11
{'fit_prior': 'False', 'alpha': 1.519911082952933e-07} 0.5585684317597219 26.954 12
{'fit_prior': 'True', 'alpha': 1.873817422860383e-07} 0.5585787416692072 26.104 13
{'fit_prior': 'False', 'alpha': 1e-09} 0.5587468285337281 26.859 14
{'fit_prior': 'False', 'alpha': 2.310129700083158e-07} 0.5585787416692072 29.226 15
{'fit_prior': 'True', 'alpha': 1.873817422860387e-09} 0.5585322432937763 29.316 16
{'fit_prior': 'True', 'alpha': 4.3287612810830526e-07} 0.5585321247002658 28.82 17
{'fit_prior': 'True', 'alpha': 1.2328467394420635e-09} 0.5585322432937763 29.354 18
{'fit_prior': 'False', 'alpha': 1.2328467394420635e-09} 0.5585322432937763 27.082 19
{'fit_prior': 'False', 'alpha': 1e-09} 0.5587468285337281 27.493 20
{'fit_prior': 'True', 'alpha': 1e-09} 0.5587468285337281 30.35 21
{'fit_prior': 'True', 'alpha': 6.579332246575682e-07} 0.5585042848171752 30.642 22
{'fit_prior': 'False', 'alpha': 1.519911082952933e-07} 0.5585684317597219 27.113 23
{'fit_prior': 'False', 'alpha': 2.310129700083158e-07} 0.5585787416692072 25.374 24
{'fit_prior': 'True', 'alpha': 1e-09} 0.5587468285337281 24.098 25
{'fit_prior': 'True', 'alpha': 1.519911082952933e-07} 0.5585684317597219 25.978 26
{'fit_prior': 'False', 'alpha': 3.5111917342151273e-09} 0.5585237755311087 26.621 27
{'fit_prior': 'False', 'alpha': 1.2328467394420635e-09} 0.5585322432937763 27.302 28
{'fit_prior': 'False', 'alpha': 1.873817422860387e-09} 0.5585322432937763 27.096 29
{'fit_prior': 'False', 'alpha': 5.336699231206313e-06} 0.5583590427195376 27.005 30
{'fit_prior': 'True', 'alpha': 2.848035868435799e-07} 0.5585787416692072 28.666 31
{'fit_prior': 'False', 'alpha': 6.579332246575682e-08} 0.55850429035763 27.525 32
{'fit_prior': 'True', 'alpha': 3.5111917342151277e-07} 0.5585321247002658 24.992 33
{'fit_prior': 'True', 'alpha': 2.310129700083158e-07} 0.5585787416692072 23.676 34
{'fit_prior': 'True', 'alpha': 8.111308307896873e-08} 0.5585527392583353 24.551 35
{'fit_prior': 'True', 'alpha': 2.310129700083158e-07} 0.5585787416692072 23.799 36
{'fit_prior': 'True', 'alpha': 1e-08} 0.5585237755311087 24.209 37
{'fit_prior': 'True', 'alpha': 1.2328467394420635e-09} 0.5585322432937763 24.771 38
{'fit_prior': 'True', 'alpha': 1.519911082952933e-08} 0.5585237755311087 23.929 39
{'fit_prior': 'False', 'alpha': 2.848035868435799e-07} 0.5585787416692072 27.103 40
{'fit_prior': 'False', 'alpha': 1.519911082952933e-09} 0.5585322432937763 27.803 41
{'fit_prior': 'False', 'alpha': 5.336699231206313e-07} 0.5585321247002658 27.139 42
{'fit_prior': 'False', 'alpha': 5.336699231206313e-07} 0.5585321247002658 26.77 43
{'fit_prior': 'True', 'alpha': 1.873817422860383e-07} 0.5585787416692072 28.174 44
{'fit_prior': 'True', 'alpha': 8.111308307896873e-08} 0.5585527392583353 25.43 45
{'fit_prior': 'True', 'alpha': 1.873817422860383e-07} 0.5585787416692072 27.594 46
{'fit_prior': 'False', 'alpha': 1.873817422860387e-09} 0.5585322432937763 28.726 47
{'fit_prior': 'True', 'alpha': 1.232846739442066e-07} 0.5585527392583353 26.621 48
{'fit_prior': 'False', 'alpha': 8.111308307896873e-08} 0.5585527392583353 32.861 49
{'fit_prior': 'True', 'alpha': 1.873817422860383e-08} 0.5585237755311087 57.316 50
{'fit_prior': 'True', 'alpha': 1.519911082952933e-08} 0.5585237755311087 23.86 51
{'fit_prior': 'True', 'alpha': 2.310129700083158e-07} 0.5585787416692072 27.31 52
{'fit_prior': 'True', 'alpha': 2.310129700083158e-07} 0.5585787416692072 28.759 53
{'fit_prior': 'True', 'alpha': 1.232846739442066e-08} 0.5585237755311087 28.147 54
{'fit_prior': 'False', 'alpha': 5.336699231206313e-07} 0.5585321247002658 30.088 55
{'fit_prior': 'False', 'alpha': 1.519911082952933e-09} 0.5585322432937763 28.261 56
{'fit_prior': 'True', 'alpha': 1.232846739442066e-08} 0.5585237755311087 24.938 57
{'fit_prior': 'True', 'alpha': 1.873817422860387e-09} 0.5585322432937763 24.441 58
{'fit_prior': 'True', 'alpha': 3.5111917342151277e-08} 0.5585237755311087 26.641 59
{'fit_prior': 'True', 'alpha': 1.519911082952933e-07} 0.5585684317597219 25.333 60
{'fit_prior': 'False', 'alpha': 2.310129700083158e-07} 0.5585787416692072 25.032 61
{'fit_prior': 'False', 'alpha': 8.111308307896873e-08} 0.5585527392583353 25.714 62
{'fit_prior': 'True', 'alpha': 1.232846739442066e-07} 0.5585527392583353 26.345 63
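###Markdown
Note that the grid passes `fit_prior` as the strings `"True"`/`"False"` rather than booleans. Older scikit-learn releases do not validate this and both strings are truthy, so every candidate effectively runs with `fit_prior=True`, which is consistent with the identical scores above for `'True'` and `'False'` at the same `alpha`; newer releases would reject the strings outright. A minimal corrected sketch, assuming the same `run_random_search` helper and data as above:
###Code
from sklearn.naive_bayes import CategoricalNB
import numpy as np
# Real booleans, so fit_prior=False is actually exercised by the search.
param_grid = {'alpha': np.logspace(0, -9, num=100),
              'fit_prior': [True, False]}
a, b, clf = run_random_search(CategoricalNB(), param_grid, X, y)
print(a, b)
###Output
_____no_output_____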
###Markdown
------------- Aalto: IoTSentinel and IoTSense
###Code
%matplotlib inline
from scipy.stats import randint as sp_randint
from scipy.stats import uniform
from scipy.stats import uniform as sp_randFloat
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from time import time
import numpy as np
import pandas as pd
import sklearn
import warnings
warnings.filterwarnings('ignore')
from scipy.stats import randint as sp_randInt
from sklearn.model_selection import GridSearchCV, PredefinedSplit
from sklearn.metrics import make_scorer
from scipy import sparse
###Output
_____no_output_____
###Markdown
IoTSentinel
###Code
df=pd.read_csv("Aalto_IoTSentinel_Train.csv")
df
df.columns
features= ['ARP', 'LLC', 'EAPOL', 'IP', 'ICMP', 'ICMP6', 'TCP', 'UDP', 'HTTP',
'HTTPS', 'DHCP', 'BOOTP', 'SSDP', 'DNS', 'MDNS', 'NTP', 'IP_padding',
'IP_add_count', 'IP_ralert', 'Portcl_src', 'Portcl_dst', 'Pck_size',
'Pck_rawdata', 'Label']
df=pd.read_csv("Aalto_IoTSentinel_Train.csv",usecols=features)
X_train = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_train=df['Label'].cat.codes
df=pd.read_csv("Aalto_IoTSentinel_Test.csv",usecols=features)
X_test = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_test=df['Label'].cat.codes
print(X_train.shape,X_test.shape)
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
def run_random_search(model, params, x_train, y_train):
    # Randomized search over `params`, scored with macro F1 on the single predefined train/test split (ps).
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')
    grid.fit(x_train, y_train)
    return (grid.best_params_, round(grid.best_score_, 8), grid.best_estimator_)
###Output
_____no_output_____
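###Markdown
The `test_fold` vector above marks every row from the training CSV with -1 (never used for validation) and every row from the test CSV with 0, so `PredefinedSplit` yields exactly one split: fit on the training data, score on the test data. A small self-contained sketch of that behaviour on toy data:
###Code
import numpy as np
from sklearn.model_selection import PredefinedSplit
# Two 'training' rows (fold index -1) and two 'test' rows (fold index 0).
toy_fold = [-1, -1, 0, 0]
toy_ps = PredefinedSplit(toy_fold)
for train_idx, test_idx in toy_ps.split(np.zeros((4, 1))):
    print(train_idx, test_idx)  # -> [0 1] [2 3]: a single, fixed train/test assignment
###Output
_____no_output_____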
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.6009580335884174 3.297 24
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 19, 'min_samples_split': 6} 0.6030508357696247 3.882 0
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 11, 'min_samples_split': 2} 0.6019134446933325 2.7 1
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 7, 'min_samples_split': 4} 0.6008899861597053 2.675 2
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 12, 'min_samples_split': 2} 0.6011518449043427 3.048 3
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 14, 'min_samples_split': 4} 0.6019271172312137 3.166 4
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 8, 'min_samples_split': 5} 0.600776034294108 2.699 5
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 13, 'min_samples_split': 4} 0.6012155830985823 3.5 6
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 12, 'min_samples_split': 4} 0.6007169103536598 3.149 7
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 20, 'min_samples_split': 4} 0.6011493068164223 3.807 8
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 6, 'min_samples_split': 5} 0.6013879377666946 2.508 9
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 13, 'min_samples_split': 8} 0.6001132367734511 3.339 10
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 8, 'min_samples_split': 5} 0.6011763041325195 2.934 11
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 12, 'min_samples_split': 6} 0.6021024566596548 3.493 12
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 19, 'min_samples_split': 9} 0.6016383576743459 3.964 13
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 20, 'min_samples_split': 2} 0.6020742363039984 3.852 14
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 11, 'min_samples_split': 6} 0.6020211030396712 3.283 15
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 14, 'min_samples_split': 8} 0.6018172305951058 3.374 16
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 17, 'min_samples_split': 6} 0.6014710326768737 3.63 17
{'criterion': 'gini', 'max_depth': 19.0, 'max_features': 4, 'min_samples_split': 3} 0.5968674903457019 2.483 18
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 20, 'min_samples_split': 8} 0.6022104439491639 4.286 19
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 7, 'min_samples_split': 4} 0.6015693137161562 2.792 20
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 14, 'min_samples_split': 6} 0.6017949216348957 3.074 21
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 19, 'min_samples_split': 6} 0.6024430151083207 3.692 22
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 5, 'min_samples_split': 5} 0.6022739364798936 2.498 23
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 21, 'min_samples_split': 7} 0.6016857134025669 3.667 24
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 13, 'min_samples_split': 3} 0.6015151763417111 3.33 25
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 3, 'min_samples_split': 5} 0.5998416932351841 2.64 26
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 9, 'min_samples_split': 7} 0.603140806980107 3.021 27
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 17, 'min_samples_split': 7} 0.6029479447390311 3.835 28
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 11, 'min_samples_split': 5} 0.6013014797072703 3.124 29
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 21, 'min_samples_split': 9} 0.6019580675893905 3.98 30
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 12, 'min_samples_split': 4} 0.6010475693032766 3.181 31
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 9, 'min_samples_split': 3} 0.6018394030497072 2.939 32
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 12, 'min_samples_split': 3} 0.6012430819654145 3.022 33
{'criterion': 'gini', 'max_depth': 19.0, 'max_features': 9, 'min_samples_split': 8} 0.5985461479436998 2.964 34
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 8, 'min_samples_split': 7} 0.601039893545091 3.005 35
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 20, 'min_samples_split': 8} 0.6007605099676454 3.967 36
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 8, 'min_samples_split': 5} 0.6008745629513287 2.611 37
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 7, 'min_samples_split': 2} 0.6024562181865079 2.861 38
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 4, 'min_samples_split': 3} 0.6023181079153617 2.604 39
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 21, 'min_samples_split': 7} 0.6029429057703155 4.212 40
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 18, 'min_samples_split': 7} 0.60174529230521 3.86 41
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 13, 'min_samples_split': 3} 0.6016370279207668 3.332 42
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 5, 'min_samples_split': 4} 0.6011680084683833 2.629 43
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 7, 'min_samples_split': 3} 0.6021200172252105 2.691 44
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 12, 'min_samples_split': 2} 0.6013955748810014 3.242 45
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 22, 'min_samples_split': 8} 0.6025412911227193 3.791 46
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 6, 'min_samples_split': 5} 0.6019437360503138 2.675 47
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 12, 'min_samples_split': 9} 0.5991803543717217 2.868 48
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 21, 'min_samples_split': 6} 0.6014563832507143 3.87 49
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 21, 'min_samples_split': 3} 0.602723854653127 3.975 50
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 16, 'min_samples_split': 5} 0.6019333950583351 3.25 51
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 6, 'min_samples_split': 7} 0.6009520232475655 2.597 52
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 9, 'min_samples_split': 2} 0.6015812662827617 3.174 53
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 5, 'min_samples_split': 2} 0.5997503129299953 2.637 54
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 3, 'min_samples_split': 3} 0.6025699515987437 2.563 55
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 5, 'min_samples_split': 6} 0.5997699179427607 2.413 56
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 9, 'min_samples_split': 5} 0.6011193171473207 2.811 57
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 17, 'min_samples_split': 2} 0.6011052786908276 3.656 58
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 14, 'min_samples_split': 9} 0.5996343767546538 3.469 59
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 3, 'min_samples_split': 4} 0.6013376013918397 2.458 60
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 19, 'min_samples_split': 8} 0.6007069584492748 3.748 61
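###Markdown
One caveat with the decision-tree grid used above: `np.linspace(1, 32, 32)` produces float depths (hence the `29.0`-style values in the results), and newer scikit-learn releases validate `max_depth` as an integer or None and may reject them. A minimal equivalent grid sketch that samples integer depths instead, with the same helper and data:
###Code
from scipy.stats import randint as sp_randint
from sklearn.tree import DecisionTreeClassifier
# Same search space, but max_depth drawn as an integer in 1..32 (upper bound exclusive in randint).
param_grid = {'criterion': ['gini', 'entropy'],
              'max_depth': sp_randint(1, 33),
              'min_samples_split': sp_randint(2, 10),
              'max_features': sp_randint(1, X_train.shape[1])}
a, b, clf = run_random_search(DecisionTreeClassifier(), param_grid, X, y)
print(a, b)
###Output
_____no_output_____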
###Markdown
IoTSense
###Code
df=pd.read_csv("Aalto_IoTSense_Train.csv")
df
df.columns
features= ['ARP', 'EAPOL', 'IP', 'ICMP', 'ICMP6', 'TCP', 'UDP', 'TCP_w_size',
'HTTP', 'HTTPS', 'DHCP', 'BOOTP', 'SSDP', 'DNS', 'MDNS', 'NTP',
'IP_padding', 'IP_ralert', 'payload_l', 'Entropy', 'Label']
df=pd.read_csv("Aalto_IoTSense_Train.csv",usecols=features)
X_train = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_train=df['Label'].cat.codes
df=pd.read_csv("Aalto_IoTSense_Test.csv",usecols=features)
X_test = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_test=df['Label'].cat.codes
print(X_train.shape,X_test.shape)
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
def run_random_search(model, params, x_train, y_train):
    # Randomized search over `params`, scored with macro F1 on the single predefined train/test split (ps).
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')
    grid.fit(x_train, y_train)
    return (grid.best_params_, round(grid.best_score_, 8), grid.best_estimator_)
###Output
_____no_output_____
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.558691645008419 3.763 24
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 15, 'min_samples_split': 2} 0.5595939405707852 5.056 0
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 17, 'min_samples_split': 5} 0.5598395436693392 4.321 1
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 11, 'min_samples_split': 5} 0.5588452549829972 3.867 2
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 9, 'min_samples_split': 2} 0.558784869621737 4.126 3
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 10, 'min_samples_split': 7} 0.5584838920603935 3.171 4
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 17, 'min_samples_split': 3} 0.5594077404103128 5.856 5
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 6, 'min_samples_split': 2} 0.5578979723451639 4.352 6
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 5} 0.560104098992348 5.024 7
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 15, 'min_samples_split': 7} 0.5589826281197542 4.628 8
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 10, 'min_samples_split': 3} 0.5578691798198041 3.807 9
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 4} 0.5599425244566223 5.04 10
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 8, 'min_samples_split': 2} 0.5588144386309883 3.277 11
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 10, 'min_samples_split': 5} 0.5591740766338967 3.24 12
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 12, 'min_samples_split': 8} 0.5584870456857225 4.284 13
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 12, 'min_samples_split': 2} 0.557735271771046 4.3 14
{'criterion': 'gini', 'max_depth': 19.0, 'max_features': 11, 'min_samples_split': 9} 0.5566510739652359 3.45 15
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 19, 'min_samples_split': 3} 0.5594735232503032 4.289 16
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 16, 'min_samples_split': 4} 0.5589298593903004 4.128 17
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 19, 'min_samples_split': 4} 0.5602022199981592 5.156 18
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 18, 'min_samples_split': 4} 0.5590846382513375 4.714 19
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 17, 'min_samples_split': 3} 0.5597057425436107 5.141 20
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 10, 'min_samples_split': 2} 0.5592884775615491 3.736 21
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 14, 'min_samples_split': 4} 0.5581566203445458 3.984 22
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 6, 'min_samples_split': 2} 0.557453022529816 3.228 23
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 19, 'min_samples_split': 7} 0.5608599075029683 5.074 24
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 7, 'min_samples_split': 6} 0.5586587463390914 3.311 25
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 17, 'min_samples_split': 7} 0.5588660384128833 3.979 26
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 14, 'min_samples_split': 6} 0.559312407927916 4.668 27
{'criterion': 'gini', 'max_depth': 21.0, 'max_features': 14, 'min_samples_split': 3} 0.5586897649017216 3.761 28
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 11, 'min_samples_split': 3} 0.5579665105908079 4.155 29
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 8, 'min_samples_split': 6} 0.5593412054537021 3.308 30
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 12, 'min_samples_split': 6} 0.5596784723687197 3.674 31
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 10, 'min_samples_split': 2} 0.5593189676546773 4.357 32
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 14, 'min_samples_split': 8} 0.55838562534971 4.617 33
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 1, 'min_samples_split': 2} 0.5577650446097624 3.211 34
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 18, 'min_samples_split': 6} 0.5600714036121163 5.242 35
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 7, 'min_samples_split': 6} 0.5585427420728055 3.198 36
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 12, 'min_samples_split': 2} 0.5590093924223245 3.926 37
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 17, 'min_samples_split': 6} 0.5601038879511053 4.011 38
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 19, 'min_samples_split': 9} 0.5599754145942392 5.115 39
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 16, 'min_samples_split': 8} 0.5583923741615665 4.255 40
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 10, 'min_samples_split': 3} 0.5575411779984907 4.314 41
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 15, 'min_samples_split': 6} 0.5599309334421654 4.362 42
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 16, 'min_samples_split': 9} 0.5582932766894707 4.177 43
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 10, 'min_samples_split': 8} 0.5572775116595065 3.099 44
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 16, 'min_samples_split': 7} 0.5587037658238144 4.133 45
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 10, 'min_samples_split': 3} 0.5582054018609128 4.351 46
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 15, 'min_samples_split': 3} 0.5585986325141291 3.965 47
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 15, 'min_samples_split': 3} 0.558407787064271 5.62 48
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 19, 'min_samples_split': 6} 0.5601370285703191 6.537 49
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 4} 0.5600779171190392 4.643 50
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 15, 'min_samples_split': 3} 0.5588715078396503 4.686 51
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 15, 'min_samples_split': 6} 0.559884329460211 3.843 52
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 10, 'min_samples_split': 2} 0.559055485102379 3.753 53
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 14, 'min_samples_split': 2} 0.5582864722741846 4.686 54
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 17, 'min_samples_split': 2} 0.560359852182515 5.06 55
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 15, 'min_samples_split': 3} 0.5584946497657707 4.855 56
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 19, 'min_samples_split': 6} 0.5601168896783831 4.718 57
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 18, 'min_samples_split': 9} 0.5596546064605892 4.992 58
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 15, 'min_samples_split': 2} 0.5593275073070032 3.931 59
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 16, 'min_samples_split': 2} 0.560227409908011 5.131 60
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 17, 'min_samples_split': 7} 0.55894843824888 4.95 61
###Markdown
________________ UNSW: IoTSentinel, IoTSense and IoTDevID
###Code
%matplotlib inline
from scipy.stats import randint as sp_randint
from scipy.stats import uniform
from scipy.stats import uniform as sp_randFloat
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from time import time
import numpy as np
import pandas as pd
import sklearn
import warnings
warnings.filterwarnings('ignore')
from scipy.stats import randint as sp_randInt
from sklearn.model_selection import GridSearchCV, PredefinedSplit
from sklearn.metrics import make_scorer
from scipy import sparse
###Output
_____no_output_____
###Markdown
IoTSentinel
###Code
df=pd.read_csv("UNSW_IoTSentinel_Train.csv")
df
df.columns
features= ['ARP', 'LLC', 'EAPOL', 'IP', 'ICMP', 'ICMP6', 'TCP', 'UDP', 'HTTP',
'HTTPS', 'DHCP', 'BOOTP', 'SSDP', 'DNS', 'MDNS', 'NTP', 'IP_padding',
'IP_add_count', 'IP_ralert', 'Portcl_src', 'Portcl_dst', 'Pck_size',
'Pck_rawdata', 'Label']
df=pd.read_csv("UNSW_IoTSentinel_Train.csv",usecols=features)
X_train = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_train=df['Label'].cat.codes
df=pd.read_csv("UNSW_IoTSentinel_Test.csv",usecols=features)
X_test = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_test=df['Label'].cat.codes
print(X_train.shape,X_test.shape)
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
def run_random_search(model, params, x_train, y_train):
    # Randomized search over `params`, scored with macro F1 on the single predefined train/test split (ps).
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')
    grid.fit(x_train, y_train)
    return (grid.best_params_, round(grid.best_score_, 8), grid.best_estimator_)
###Output
_____no_output_____
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.504726029658401 13.599 24
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 6, 'min_samples_split': 5} 0.5011076161175921 10.733 0
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 1, 'min_samples_split': 4} 0.5099407835954021 10.322 1
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 20, 'min_samples_split': 7} 0.5034832751111291 17.245 2
{'criterion': 'gini', 'max_depth': 14.0, 'max_features': 8, 'min_samples_split': 8} 0.4840597514969355 11.02 3
{'criterion': 'gini', 'max_depth': 16.0, 'max_features': 8, 'min_samples_split': 9} 0.4902067490058035 12.682 4
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 12, 'min_samples_split': 2} 0.5016726208159694 12.441 5
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 7, 'min_samples_split': 9} 0.5034736913310647 10.913 6
{'criterion': 'gini', 'max_depth': 16.0, 'max_features': 8, 'min_samples_split': 6} 0.5008376244569992 12.164 7
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 7, 'min_samples_split': 3} 0.5039092700355589 10.641 8
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 15, 'min_samples_split': 6} 0.4996327590271571 16.35 9
{'criterion': 'gini', 'max_depth': 18.0, 'max_features': 11, 'min_samples_split': 4} 0.5061354111005771 12.233 10
{'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 4, 'min_samples_split': 5} 0.4986113240344758 9.395 11
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 20, 'min_samples_split': 7} 0.5050065394085064 18.013 12
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 12, 'min_samples_split': 7} 0.4960164778500743 12.823 13
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 7, 'min_samples_split': 3} 0.5075217944059709 12.056 14
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 1, 'min_samples_split': 4} 0.5077250843318716 10.546 15
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 11, 'min_samples_split': 5} 0.4986665069490616 12.909 16
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 1, 'min_samples_split': 2} 0.5100791293821751 9.719 17
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 7, 'min_samples_split': 5} 0.5006937243444661 12.388 18
{'criterion': 'gini', 'max_depth': 18.0, 'max_features': 10, 'min_samples_split': 2} 0.5038110574780196 13.683 19
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 4, 'min_samples_split': 3} 0.5081433291552055 10.394 20
{'criterion': 'gini', 'max_depth': 16.0, 'max_features': 17, 'min_samples_split': 2} 0.5029991392838162 14.67 21
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 13, 'min_samples_split': 2} 0.4986205079473723 12.995 22
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 15, 'min_samples_split': 6} 0.5062511840151633 18.285 23
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 6, 'min_samples_split': 7} 0.5090669921246591 12.73 24
{'criterion': 'gini', 'max_depth': 15.0, 'max_features': 8, 'min_samples_split': 5} 0.4906767680208231 12.137 25
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 7, 'min_samples_split': 5} 0.5070183295947354 11.813 26
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 3, 'min_samples_split': 2} 0.5085798825005508 10.277 27
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 2, 'min_samples_split': 2} 0.5119003823868639 10.773 28
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 22, 'min_samples_split': 3} 0.5044494289399762 20.797 29
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 8, 'min_samples_split': 6} 0.5054691586463822 11.663 30
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 6, 'min_samples_split': 5} 0.5091749882257619 11.075 31
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 20, 'min_samples_split': 5} 0.5012757402327029 16.771 32
{'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 7, 'min_samples_split': 3} 0.5031779087491085 10.291 33
{'criterion': 'entropy', 'max_depth': 11.0, 'max_features': 19, 'min_samples_split': 4} 0.4911663108778608 13.677 34
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 5, 'min_samples_split': 6} 0.5015989686107943 9.756 35
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 10, 'min_samples_split': 7} 0.4965797407924477 12.52 36
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 21, 'min_samples_split': 7} 0.5033311331482704 18.193 37
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 6, 'min_samples_split': 5} 0.504662660944637 10.959 38
{'criterion': 'gini', 'max_depth': 15.0, 'max_features': 6, 'min_samples_split': 6} 0.4872157537399422 10.647 39
{'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 3, 'min_samples_split': 4} 0.4982487639434857 9.373 40
{'criterion': 'gini', 'max_depth': 17.0, 'max_features': 10, 'min_samples_split': 6} 0.5050965370624528 11.397 41
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 5, 'min_samples_split': 2} 0.5026617658618661 10.701 42
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 20, 'min_samples_split': 4} 0.500004790575644 16.968 43
{'criterion': 'entropy', 'max_depth': 19.0, 'max_features': 18, 'min_samples_split': 6} 0.49891426702570124 16.496 44
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 15, 'min_samples_split': 8} 0.4944616282131263 17.819 45
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 13, 'min_samples_split': 6} 0.503500611071228 14.531 46
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 2, 'min_samples_split': 9} 0.5071728953550535 12.655 47
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 14, 'min_samples_split': 8} 0.5078407122747342 18.85 48
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 18, 'min_samples_split': 3} 0.5013723903832534 18.582 49
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 4, 'min_samples_split': 5} 0.5047723891239894 14.321 50
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 13, 'min_samples_split': 9} 0.5004311190785782 18.093 51
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 9, 'min_samples_split': 6} 0.5011535971703264 14.24 52
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 8, 'min_samples_split': 4} 0.5092393261112336 11.935 53
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 1, 'min_samples_split': 3} 0.5122428509301188 10.181 54
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 15, 'min_samples_split': 7} 0.5019485168697276 17.64 55
{'criterion': 'gini', 'max_depth': 16.0, 'max_features': 13, 'min_samples_split': 8} 0.49506843259753497 14.223 56
{'criterion': 'entropy', 'max_depth': 14.0, 'max_features': 2, 'min_samples_split': 6} 0.4904174980600612 10.34 57
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 1, 'min_samples_split': 4} 0.5079040871241363 12.143 58
{'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 22, 'min_samples_split': 7} 0.49945671873638003 21.553 59
{'criterion': 'gini', 'max_depth': 17.0, 'max_features': 16, 'min_samples_split': 5} 0.5043649900323741 18.325 60
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 18, 'min_samples_split': 3} 0.5006713367493819 19.049 61
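###Markdown
The printed tables get long; an optional sketch that collects each iteration's result into a DataFrame and sorts by macro F1, so the best configuration can be read off directly (assuming the same `run_random_search` helper, `param_grid` and 25-repeat averaging used above):
###Code
import pandas as pd
import sklearn.metrics
from time import time
from sklearn.tree import DecisionTreeClassifier
rows = []
for i in range(10):  # fewer search iterations here, purely to illustrate the bookkeeping
    start = time()
    params, _, clf = run_random_search(DecisionTreeClassifier(), param_grid, X, y)
    scores = []
    for _ in range(25):
        clf.fit(X_train, y_train)
        scores.append(sklearn.metrics.f1_score(y_test, clf.predict(X_test), average='macro'))
    rows.append({**params, 'f1_macro': sum(scores) / len(scores), 'seconds': round(time() - start, 3)})
results = pd.DataFrame(rows).sort_values('f1_macro', ascending=False)
print(results.head())
###Output
_____no_output_____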
###Markdown
IoTSense
###Code
df=pd.read_csv("UNSW_IoTSense_Train.csv")
df
df.columns
features= ['ARP', 'EAPOL', 'IP', 'ICMP', 'ICMP6', 'TCP', 'UDP', 'TCP_w_size',
'HTTP', 'HTTPS', 'DHCP', 'BOOTP', 'SSDP', 'DNS', 'MDNS', 'NTP',
'IP_padding', 'IP_ralert', 'payload_l', 'Entropy', 'Label']
df=pd.read_csv("UNSW_IoTSense_Train.csv",usecols=features)
X_train = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_train=df['Label'].cat.codes
df=pd.read_csv("UNSW_IoTSense_Test.csv",usecols=features)
X_test = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_test=df['Label'].cat.codes
print(X_train.shape,X_test.shape)
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
def run_random_search(model, params, x_train, y_train):
    # Randomized search over `params`, scored with macro F1 on the single predefined train/test split (ps).
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')
    grid.fit(x_train, y_train)
    return (grid.best_params_, round(grid.best_score_, 8), grid.best_estimator_)
###Output
_____no_output_____
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.7001257412876442 12.571 24
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 10, 'min_samples_split': 2} 0.6962463666427438 16.541 0
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 16, 'min_samples_split': 6} 0.6838823911295954 18.952 1
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 13, 'min_samples_split': 4} 0.6936496038475042 18.467 2
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 6, 'min_samples_split': 4} 0.6912024571500703 14.505 3
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 5, 'min_samples_split': 3} 0.6893947984204619 12.652 4
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 6, 'min_samples_split': 2} 0.6899510406043099 14.627 5
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 9, 'min_samples_split': 6} 0.6928022946125325 20.951 6
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 9, 'min_samples_split': 3} 0.6910147378351368 19.288 7
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 7, 'min_samples_split': 2} 0.6918778120243164 16.435 8
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 9, 'min_samples_split': 8} 0.688732867102963 18.739 9
{'criterion': 'gini', 'max_depth': 23.0, 'max_features': 12, 'min_samples_split': 2} 0.690491354887647 15.243 10
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 11, 'min_samples_split': 5} 0.6957719438700803 16.739 11
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 9, 'min_samples_split': 9} 0.6863490179307066 15.766 12
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 9, 'min_samples_split': 5} 0.6839057256556476 14.682 13
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 12, 'min_samples_split': 4} 0.6951327129964894 18.031 14
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 9, 'min_samples_split': 2} 0.6912623877924392 13.39 15
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 16, 'min_samples_split': 4} 0.6911555500792397 18.095 16
{'criterion': 'gini', 'max_depth': 25.0, 'max_features': 19, 'min_samples_split': 6} 0.691319250520437 21.042 17
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 5, 'min_samples_split': 2} 0.6932637029791313 17.78 18
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 3, 'min_samples_split': 9} 0.6813813756446596 17.994 19
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 12, 'min_samples_split': 8} 0.6905132076847235 21.092 20
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 12, 'min_samples_split': 7} 0.6922000969803984 19.078 21
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 11, 'min_samples_split': 4} 0.6985094029104743 18.59 22
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 2, 'min_samples_split': 5} 0.688680060771191 13.692 23
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 14, 'min_samples_split': 7} 0.6729053258232088 19.847 24
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 14, 'min_samples_split': 9} 0.690791475801948 19.001 25
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 9, 'min_samples_split': 3} 0.689423557315838 14.935 26
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 14, 'min_samples_split': 2} 0.6929113635081885 17.307 27
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 12, 'min_samples_split': 9} 0.6906661681031472 18.024 28
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 14, 'min_samples_split': 3} 0.6954950672962711 17.919 29
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 9, 'min_samples_split': 4} 0.6752863725180661 14.823 30
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 5} 0.6939793078044088 23.807 31
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 13, 'min_samples_split': 6} 0.6960335610638793 16.733 32
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 17, 'min_samples_split': 5} 0.68859082912847 16.22 33
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 8, 'min_samples_split': 4} 0.6939052438445559 16.382 34
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 11, 'min_samples_split': 7} 0.6907153997235231 17.37 35
{'criterion': 'entropy', 'max_depth': 20.0, 'max_features': 11, 'min_samples_split': 5} 0.6957428893615905 17.498 36
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 14, 'min_samples_split': 4} 0.6879947391126389 16.527 37
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 10, 'min_samples_split': 8} 0.6890183156304949 15.672 38
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 13, 'min_samples_split': 3} 0.6860991474660654 14.942 39
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 14, 'min_samples_split': 8} 0.6924603073747808 18.62 40
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 19, 'min_samples_split': 9} 0.6916950973740211 17.531 41
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 6, 'min_samples_split': 8} 0.6912850163871483 14.64 42
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 13, 'min_samples_split': 9} 0.6915718682190396 17.658 43
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 9, 'min_samples_split': 9} 0.6837315616684481 15.4 44
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 13, 'min_samples_split': 9} 0.6847142639138403 18.055 45
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 13, 'min_samples_split': 5} 0.6948649111091717 19.62 46
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 8, 'min_samples_split': 6} 0.6872834908850775 16.565 47
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 4, 'min_samples_split': 3} 0.6896827700556419 15.197 48
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 7, 'min_samples_split': 8} 0.6883226495321716 15.687 49
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 14, 'min_samples_split': 3} 0.6941825750005505 16.143 50
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 2, 'min_samples_split': 3} 0.6871883610876159 12.064 51
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 17, 'min_samples_split': 3} 0.6910708862009356 18.878 52
{'criterion': 'gini', 'max_depth': 27.0, 'max_features': 8, 'min_samples_split': 5} 0.6890964566116566 13.411 53
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 17, 'min_samples_split': 2} 0.697008296565034 19.799 54
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 13, 'min_samples_split': 5} 0.6876337004224824 17.405 55
{'criterion': 'entropy', 'max_depth': 29.0, 'max_features': 12, 'min_samples_split': 9} 0.6880851815352907 16.976 56
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 11, 'min_samples_split': 3} 0.6874227213264116 15.943 57
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 12, 'min_samples_split': 2} 0.6895681016627089 15.402 58
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 13, 'min_samples_split': 2} 0.7011795266098307 20.808 59
{'criterion': 'entropy', 'max_depth': 15.0, 'max_features': 13, 'min_samples_split': 2} 0.68570676479959 19.183 60
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 8, 'min_samples_split': 5} 0.6920184736238427 19.815 61
###Markdown
IoTDevID
###Code
df=pd.read_csv("UNSW_train_IoTDevID.csv")
df
df.columns
features= ['pck_size', 'Ether_type', 'LLC_ctrl', 'EAPOL_version', 'EAPOL_type', 'IP_ihl', 'IP_tos', 'IP_len', 'IP_flags', 'IP_DF', 'IP_ttl', 'IP_options', 'ICMP_code', 'TCP_dataofs', 'TCP_FIN', 'TCP_ACK', 'TCP_window', 'UDP_len', 'DHCP_options', 'BOOTP_hlen', 'BOOTP_flags', 'BOOTP_sname', 'BOOTP_file', 'BOOTP_options', 'DNS_qr', 'DNS_rd', 'DNS_qdcount', 'dport_class', 'payload_bytes', 'entropy',
'Label']
df=pd.read_csv("UNSW_train_IoTDevID.csv",usecols=features)
X_train = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_train=df['Label'].cat.codes
df=pd.read_csv("UNSW_test_IoTDevID.csv",usecols=features)
X_test = df.iloc[:,0:-1]
df['Label'] = df['Label'].astype('category')
y_test=df['Label'].cat.codes
print(X_train.shape,X_test.shape)
X= np.concatenate([X_train, X_test])
test_fold = [-1 for _ in range(X_train.shape[0])] + [0 for _ in range(X_test.shape[0])]
y = np.concatenate([y_train, y_test])
ps = PredefinedSplit(test_fold)
def run_random_search(model, params, x_train, y_train):
    # Randomized search over `params`, scored with macro F1 on the single predefined train/test split (ps).
    grid = RandomizedSearchCV(model, params, cv=ps, scoring='f1_macro')
    grid.fit(x_train, y_train)
    return (grid.best_params_, round(grid.best_score_, 8), grid.best_estimator_)
###Output
_____no_output_____
###Markdown
RandomizedSearchCV DT
###Code
print ('%-90s %-20s %-8s %-8s' % ("HYPERPARAMETERS","F1 Score", "Time", "No"))
nfolds=10
param_grid = { 'criterion':['gini','entropy'],
"max_depth":np.linspace(1, 32, 32, endpoint=True),
"min_samples_split": sp_randint(2,10),#uniform(0.1,1 ),
# "min_samples_leafs" : np.linspace(0.1, 0.5, 5, endpoint=True)
"max_features" : sp_randint(1,X_train.shape[1])}
second=time()
f1=[]
clf=DecisionTreeClassifier()
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % ("default",f1,round(time()-second,3),ii))
for i in range(100):
second=time()
a,b,clf=run_random_search(DecisionTreeClassifier(),param_grid,X,y)
f1=[]
for ii in range(25):
clf.fit(X_train, y_train)
predict =clf.predict(X_test)
f1.append(sklearn.metrics.f1_score(y_test, predict,average= "macro") )
f1=sum(f1)/len(f1)
#if f1>0.76:
print('%-90s %-20s %-8s %-8s' % (a,f1,round(time()-second,3),i))
###Output
HYPERPARAMETERS F1 Score Time No
default 0.8195978420312032 21.566 24
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 5, 'min_samples_split': 7} 0.8323455036302395 14.317 0
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 9, 'min_samples_split': 4} 0.8280182896840249 16.834 1
{'criterion': 'gini', 'max_depth': 22.0, 'max_features': 22, 'min_samples_split': 7} 0.8234451217809853 21.915 2
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 17, 'min_samples_split': 8} 0.8401239596610508 22.14 3
{'criterion': 'entropy', 'max_depth': 30.0, 'max_features': 17, 'min_samples_split': 3} 0.8389146492593934 21.593 4
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 24, 'min_samples_split': 3} 0.8298105842136465 24.507 5
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 29, 'min_samples_split': 5} 0.8266814904437577 27.172 6
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 6, 'min_samples_split': 4} 0.8346399252015262 13.768 7
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 19, 'min_samples_split': 4} 0.8353015991591904 22.487 8
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 5, 'min_samples_split': 2} 0.8356332249673563 13.116 9
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 18, 'min_samples_split': 3} 0.8367648201378618 20.709 10
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 21, 'min_samples_split': 8} 0.8288974575127522 22.085 11
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 10, 'min_samples_split': 5} 0.835427634371817 16.389 12
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 24, 'min_samples_split': 6} 0.8283315455395912 24.892 13
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 9, 'min_samples_split': 7} 0.8326494183633478 16.294 14
{'criterion': 'entropy', 'max_depth': 17.0, 'max_features': 28, 'min_samples_split': 5} 0.8281997425077925 27.357 15
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 4, 'min_samples_split': 3} 0.8361018737158247 14.633 16
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 5, 'min_samples_split': 4} 0.8296858067121526 14.078 17
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 13, 'min_samples_split': 3} 0.8317979710762653 19.518 18
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 11, 'min_samples_split': 6} 0.827274386170492 18.25 19
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 21, 'min_samples_split': 3} 0.8303547862220337 22.429 20
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 15, 'min_samples_split': 3} 0.8363697294745656 22.048 21
{'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 16, 'min_samples_split': 7} 0.8227403753292174 22.993 22
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 13, 'min_samples_split': 6} 0.8351418271651093 17.502 23
{'criterion': 'gini', 'max_depth': 32.0, 'max_features': 10, 'min_samples_split': 2} 0.8292249495394688 15.769 24
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 20, 'min_samples_split': 5} 0.8411410044532194 25.58 25
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 19, 'min_samples_split': 3} 0.8338041860639126 21.014 26
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 29, 'min_samples_split': 2} 0.8221013484287332 28.225 27
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 21, 'min_samples_split': 7} 0.8286730688563122 21.69 28
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 21, 'min_samples_split': 2} 0.8376517597728984 25.465 29
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 5} 0.8267613648787466 21.379 30
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 16, 'min_samples_split': 8} 0.8378037767957747 21.963 31
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 12, 'min_samples_split': 4} 0.8341055809299021 22.311 32
{'criterion': 'entropy', 'max_depth': 25.0, 'max_features': 23, 'min_samples_split': 7} 0.8325632711164136 22.559 33
{'criterion': 'entropy', 'max_depth': 32.0, 'max_features': 17, 'min_samples_split': 9} 0.8359108124141309 19.842 34
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 4, 'min_samples_split': 6} 0.833650714187229 15.562 35
{'criterion': 'entropy', 'max_depth': 21.0, 'max_features': 5, 'min_samples_split': 6} 0.8269793763911965 16.518 36
{'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 22, 'min_samples_split': 4} 0.826203696033005 24.615 37
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 26, 'min_samples_split': 4} 0.8256713663177898 25.719 38
{'criterion': 'entropy', 'max_depth': 28.0, 'max_features': 15, 'min_samples_split': 2} 0.8373348279853662 20.841 39
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 17, 'min_samples_split': 3} 0.8352639855001773 20.756 40
{'criterion': 'entropy', 'max_depth': 16.0, 'max_features': 26, 'min_samples_split': 7} 0.8262450735090955 24.209 41
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 22, 'min_samples_split': 9} 0.8223447037776687 23.171 42
{'criterion': 'entropy', 'max_depth': 18.0, 'max_features': 26, 'min_samples_split': 4} 0.8330573117260226 26.9 43
{'criterion': 'gini', 'max_depth': 29.0, 'max_features': 5, 'min_samples_split': 9} 0.8267224483547463 14.151 44
{'criterion': 'gini', 'max_depth': 30.0, 'max_features': 20, 'min_samples_split': 5} 0.8310907364605014 28.152 45
{'criterion': 'gini', 'max_depth': 24.0, 'max_features': 22, 'min_samples_split': 5} 0.8262053912756336 22.675 46
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 11, 'min_samples_split': 4} 0.8371924331341448 19.724 47
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 22, 'min_samples_split': 2} 0.8282200594539677 27.26 48
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 10, 'min_samples_split': 6} 0.8311058103028012 18.823 49
{'criterion': 'entropy', 'max_depth': 26.0, 'max_features': 1, 'min_samples_split': 5} 0.8268240505977196 14.623 50
{'criterion': 'entropy', 'max_depth': 31.0, 'max_features': 4, 'min_samples_split': 5} 0.8311645150991157 15.399 51
{'criterion': 'entropy', 'max_depth': 22.0, 'max_features': 19, 'min_samples_split': 7} 0.8348119469335796 20.102 52
{'criterion': 'gini', 'max_depth': 20.0, 'max_features': 16, 'min_samples_split': 9} 0.8192823410531247 19.806 53
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 14, 'min_samples_split': 8} 0.834122383357036 17.543 54
{'criterion': 'entropy', 'max_depth': 24.0, 'max_features': 1, 'min_samples_split': 5} 0.8217907553009752 13.364 55
{'criterion': 'gini', 'max_depth': 26.0, 'max_features': 16, 'min_samples_split': 6} 0.8298804542236533 18.353 56
{'criterion': 'gini', 'max_depth': 31.0, 'max_features': 10, 'min_samples_split': 6} 0.8288904647120535 14.997 57
{'criterion': 'entropy', 'max_depth': 27.0, 'max_features': 13, 'min_samples_split': 3} 0.8388303013983308 19.271 58
{'criterion': 'gini', 'max_depth': 28.0, 'max_features': 27, 'min_samples_split': 2} 0.8217666179953043 25.057 59
{'criterion': 'entropy', 'max_depth': 23.0, 'max_features': 19, 'min_samples_split': 8} 0.8362543423910034 21.237 60
{'criterion': 'gini', 'max_depth': 19.0, 'max_features': 10, 'min_samples_split': 5} 0.8110013361357734 13.993 61
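###Markdown
The helper run_random_search used above is defined earlier in the notebook and is not shown in this excerpt. For reference only, a minimal sketch of what such a helper could look like with scikit-learn's RandomizedSearchCV follows; the signature, defaults and scoring choice here are assumptions, not the notebook's actual implementation.
###Code
# Hypothetical sketch of a run_random_search helper (not the original code).
from sklearn.model_selection import RandomizedSearchCV

def run_random_search(model, param_grid, X, y, n_iter=20, cv=5):
    # Sample n_iter hyperparameter combinations and score each with
    # cross-validated macro F1.
    search = RandomizedSearchCV(model, param_distributions=param_grid,
                                n_iter=n_iter, cv=cv, scoring="f1_macro")
    search.fit(X, y)
    # Return the chosen parameters, the best CV score and the refitted
    # estimator, matching the (a, b, clf) unpacking used above.
    return search.best_params_, search.best_score_, search.best_estimator_
###Output
_____no_output_____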
Tutorials/Boston Housing - XGBoost (Batch Transform) - Low Level.ipynb | ###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.
The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/)
General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
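###Markdown
A side note, not part of the original notebook: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so on newer environments the dataset has to be fetched another way. The sketch below follows the loading recipe from scikit-learn's deprecation notice; it yields raw arrays only, so the feature names used later would have to be supplied by hand.
###Code
# Fallback sketch for scikit-learn >= 1.2, where load_boston is unavailable.
import numpy as np
import pandas as pd

data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
# The file stores each record on two physical lines; stitch them back together.
boston_data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
boston_target = raw_df.values[1::2, 2]
###Output
_____no_output_____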
###Markdown
Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
INFO:sagemaker:Created S3 bucket: sagemaker-us-east-1-440180731255
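###Markdown
As an optional sanity check (a sketch, not part of the original run), the uploaded objects can be listed directly with the boto3 S3 client attached to the session.
###Code
# List the objects under our prefix to confirm the three csv files reached S3.
s3_client = session.boto_session.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____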
###Markdown
Step 4: Train and construct the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself.
Set up the training job
First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
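###Markdown
For comparison only, and not executed here: the same job can be expressed much more compactly with the high-level Estimator API of the 1.x SDK, which builds an equivalent request behind the scenes. The sketch below assumes the container, role, session, prefix and data locations defined above.
###Code
# High-level sketch equivalent to the low-level training_params dict above.
from sagemaker.estimator import Estimator

xgb = Estimator(container, role,
                train_instance_count=1,
                train_instance_type='ml.m4.xlarge',
                train_volume_size=5,
                output_path="s3://{}/{}/output".format(session.default_bucket(), prefix),
                sagemaker_session=session)
xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6,
                        subsample=0.8, objective='reg:linear',
                        early_stopping_rounds=10, num_round=200)
# Launching it would look roughly like:
# xgb.fit({'train': sagemaker.session.s3_input(train_location, content_type='csv'),
#          'validation': sagemaker.session.s3_input(val_location, content_type='csv')})
###Output
_____no_output_____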
###Markdown
Execute the training job
Now that we've built the dict containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2018-10-11 05:00:50 Starting - Launching requested ML instances.........
Preparing the instances for training......
2018-10-11 05:02:59 Downloading - Downloading input data
2018-10-11 05:03:07 Training - Downloading the training image..
[31mArguments: train[0m
[31m[2018-10-11:05:03:35:INFO] Running standalone xgboost training.[0m
[31m[2018-10-11:05:03:35:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8584.64mb[0m
[31m[2018-10-11:05:03:35:INFO] Determined delimiter of CSV input is ','[0m
[31m[05:03:35] S3DistributionType set as FullyReplicated[0m
[31m[05:03:35] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2018-10-11:05:03:35:INFO] Determined delimiter of CSV input is ','[0m
[31m[05:03:35] S3DistributionType set as FullyReplicated[0m
[31m[05:03:35] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[0]#011train-rmse:19.6255#011validation-rmse:20.3723[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[1]#011train-rmse:15.9806#011validation-rmse:17[0m
[31m[2]#011train-rmse:13.0901#011validation-rmse:14.3797[0m
[31m[3]#011train-rmse:10.7437#011validation-rmse:12.427[0m
[31m[4]#011train-rmse:8.82626#011validation-rmse:10.6373[0m
[31m[5]#011train-rmse:7.26402#011validation-rmse:9.27117[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:6.10398#011validation-rmse:8.40006[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.16357#011validation-rmse:7.61637[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.43563#011validation-rmse:7.02392[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:3.85331#011validation-rmse:6.61666[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.39415#011validation-rmse:6.27109[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:3.04076#011validation-rmse:6.09621[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:2.75692#011validation-rmse:5.90831[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.52399#011validation-rmse:5.74742[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[14]#011train-rmse:2.3549#011validation-rmse:5.67402[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[15]#011train-rmse:2.19168#011validation-rmse:5.62773[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[16]#011train-rmse:2.10692#011validation-rmse:5.57276[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[17]#011train-rmse:2.04891#011validation-rmse:5.49547[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[18]#011train-rmse:1.94392#011validation-rmse:5.43291[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[19]#011train-rmse:1.85915#011validation-rmse:5.38573[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:1.78#011validation-rmse:5.28107[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:1.73377#011validation-rmse:5.21082[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:1.68226#011validation-rmse:5.18447[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:1.65138#011validation-rmse:5.1788[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.60893#011validation-rmse:5.18785[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.57348#011validation-rmse:5.19231[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.54866#011validation-rmse:5.20387[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.5359#011validation-rmse:5.2061[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.48284#011validation-rmse:5.2047[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[29]#011train-rmse:1.44001#011validation-rmse:5.22442[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.42455#011validation-rmse:5.19335[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.39128#011validation-rmse:5.16391[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[32]#011train-rmse:1.37067#011validation-rmse:5.15462[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[33]#011train-rmse:1.35998#011validation-rmse:5.15038[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[34]#011train-rmse:1.34122#011validation-rmse:5.15624[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[31m[35]#011train-rmse:1.32784#011validation-rmse:5.12332[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[36]#011train-rmse:1.30025#011validation-rmse:5.11416[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 4 pruned nodes, max_depth=3[0m
[31m[37]#011train-rmse:1.29075#011validation-rmse:5.10241[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[38]#011train-rmse:1.27334#011validation-rmse:5.11076[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[39]#011train-rmse:1.26374#011validation-rmse:5.08404[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[40]#011train-rmse:1.23758#011validation-rmse:5.07725[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[41]#011train-rmse:1.21792#011validation-rmse:5.05967[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 4 pruned nodes, max_depth=3[0m
[31m[42]#011train-rmse:1.20485#011validation-rmse:5.08134[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[43]#011train-rmse:1.17556#011validation-rmse:5.08723[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[44]#011train-rmse:1.15115#011validation-rmse:5.10521[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[31m[45]#011train-rmse:1.13803#011validation-rmse:5.08627[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[46]#011train-rmse:1.12508#011validation-rmse:5.05624[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 28 pruned nodes, max_depth=3[0m
[31m[47]#011train-rmse:1.11249#011validation-rmse:5.06889[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[48]#011train-rmse:1.0964#011validation-rmse:5.08904[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[31m[49]#011train-rmse:1.09154#011validation-rmse:5.09736[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[31m[50]#011train-rmse:1.08763#011validation-rmse:5.10814[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[51]#011train-rmse:1.07454#011validation-rmse:5.10932[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[52]#011train-rmse:1.05423#011validation-rmse:5.09225[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[53]#011train-rmse:1.04396#011validation-rmse:5.0971[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[31m[54]#011train-rmse:1.04523#011validation-rmse:5.11451[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[31m[55]#011train-rmse:1.04524#011validation-rmse:5.11538[0m
[31m[05:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4[0m
[31m[56]#011train-rmse:1.03594#011validation-rmse:5.11774[0m
[31mStopping. Best iteration:[0m
[31m[46]#011train-rmse:1.12508#011validation-rmse:5.05624
[0m
###Markdown
Build the model
Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
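###Markdown
A quick way to confirm the model was registered (a sketch, not part of the original run) is to describe it and print the inference image and artifact location that SageMaker recorded.
###Code
# Describe the model we just created and show where its artifacts live.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['Image'])
print(model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____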
###Markdown
Step 5: Testing the model
Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier.
Set up the batch transform job
Just like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform job
Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
......................................!
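###Markdown
Once the job terminates, its final status (and a failure reason, if one exists) can also be queried directly from the low-level client; a small sketch is shown below.
###Code
# Inspect the finished transform job; the status should read 'Completed'.
transform_job_info = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
print(transform_job_info['TransformJobStatus'])
###Output
_____no_output_____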
###Markdown
Analyze the results
Now that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (15.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-440180731255/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
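###Markdown
The scatter plot is qualitative; a single error number is a useful complement. A minimal sketch computing the test RMSE from the same predictions is shown below.
###Code
# Root mean squared error between the true median prices and the batch
# transform predictions loaded above.
from sklearn.metrics import mean_squared_error
rmse = np.sqrt(mean_squared_error(Y_test, Y_pred))
print("Test RMSE: {:.2f}".format(rmse))
###Output
_____no_output_____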
###Markdown
Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.
The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/)
General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 19.7 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.98)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9)
Requirement already satisfied: botocore<1.21.0,>=1.20.98 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.98)
Requirement already satisfied: s3transfer<0.5.0,>=0.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.4.2)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.98->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.98->boto3>=1.14.12->sagemaker==1.72.0) (1.26.5)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=66018ddc11f6b2e53db14178fab194037574d96b8b1754cc4dee3c819e1500a1
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.45.0
Uninstalling sagemaker-2.45.0:
Successfully uninstalled sagemaker-2.45.0
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
[33mWARNING: You are using pip version 21.1.2; however, version 21.1.3 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
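###Markdown
After pinning the SDK it is worth confirming which version is actually imported; a sketch is shown below. Note that the notebook kernel may need to be restarted before the downgraded package takes effect.
###Code
# Confirm the pinned SageMaker SDK version is the one in use.
import sagemaker
print(sagemaker.__version__)
###Output
_____no_output_____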
###Markdown
Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself.
Set up the training job
First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
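###Markdown
The warning above shows how to pin the newer XGBoost image; a sketch of that variant is given below, although the rest of this notebook keeps the default image returned above.
###Code
# Variant suggested by the warning: explicitly request the 1.0-1 XGBoost image.
container_v1 = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')
###Output
_____no_output_____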
###Markdown
Execute the training job
Now that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-06-30 11:37:23 Starting - Starting the training job...
2021-06-30 11:37:27 Starting - Launching requested ML instances......
2021-06-30 11:38:55 Starting - Preparing the instances for training.........
2021-06-30 11:40:07 Downloading - Downloading input data...
2021-06-30 11:40:53 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2021-06-30:11:40:53:INFO] Running standalone xgboost training.[0m
[34m[2021-06-30:11:40:53:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8414.87mb[0m
[34m[2021-06-30:11:40:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:40:53] S3DistributionType set as FullyReplicated[0m
[34m[11:40:53] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-06-30:11:40:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:40:53] S3DistributionType set as FullyReplicated[0m
[34m[11:40:53] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:20.0455#011validation-rmse:19.1471[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.4083#011validation-rmse:15.5607[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.4625#011validation-rmse:12.7217[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:11.1752#011validation-rmse:10.5243[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.21224#011validation-rmse:8.81655[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.7486#011validation-rmse:7.40066[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.60457#011validation-rmse:6.28638[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.57188#011validation-rmse:5.40527[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.79998#011validation-rmse:4.88118[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.16098#011validation-rmse:4.45257[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.65806#011validation-rmse:4.15943[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.23908#011validation-rmse:3.93055[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.89453#011validation-rmse:3.78524[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.65122#011validation-rmse:3.6097[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.44983#011validation-rmse:3.52992[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.29414#011validation-rmse:3.42587[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.14874#011validation-rmse:3.34927[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.02736#011validation-rmse:3.28692[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:1.93653#011validation-rmse:3.25042[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.87055#011validation-rmse:3.24323[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.81157#011validation-rmse:3.22785[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.75346#011validation-rmse:3.21783[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.67489#011validation-rmse:3.18978[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.6246#011validation-rmse:3.19896[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.59585#011validation-rmse:3.19926[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.56014#011validation-rmse:3.17775[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.52449#011validation-rmse:3.18747[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.47026#011validation-rmse:3.1788[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.39102#011validation-rmse:3.18309[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.36814#011validation-rmse:3.16683[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.34703#011validation-rmse:3.17415[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.29517#011validation-rmse:3.14671[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.2646#011validation-rmse:3.16526[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.21977#011validation-rmse:3.16876[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.18149#011validation-rmse:3.18001[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[35]#011train-rmse:1.1557#011validation-rmse:3.18672[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.13532#011validation-rmse:3.19159[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.08596#011validation-rmse:3.1949[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.06459#011validation-rmse:3.19661[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.03739#011validation-rmse:3.20179[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.01287#011validation-rmse:3.19291[0m
[34m[11:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:0.994812#011validation-rmse:3.20625[0m
[34mStopping. Best iteration:[0m
[34m[31]#011train-rmse:1.29517#011validation-rmse:3.14671
[0m
###Markdown
Build the model
Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (31.6 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-608850729155/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
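# A small optional addition (a sketch, not part of the recorded run): overlay the ideal
# x = y line mentioned above so deviations from perfect predictions are easier to judge.
# The [0, 55] range is an assumed bound for Boston median prices, in $1000s.
ideal = [0, 55]
plt.plot(ideal, ideal, linestyle='--', color='gray', label='x = y (perfect prediction)')
plt.legend()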
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
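# The warning shown in this cell's output notes that a newer XGBoost image exists and how to
# request it (get_image_uri(region, 'xgboost', '0.90-1')); this run simply uses the default.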
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
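# If you do experiment, one hedged example (not what this run used) is a smaller learning
# rate paired with more boosting rounds, e.g.:
#   training_params['HyperParameters'].update({"eta": "0.1", "num_round": "500"})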
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-07-02 23:52:42 Starting - Launching requested ML instances...
2020-07-02 23:53:50 Starting - Preparing the instances for training......
2020-07-02 23:54:39 Downloading - Downloading input data...
2020-07-02 23:55:21 Training - Training image download completed. Training in progress.
2020-07-02 23:55:21 Uploading - Uploading generated training model.[34mArguments: train[0m
[34m[2020-07-02:23:55:16:INFO] Running standalone xgboost training.[0m
[34m[2020-07-02:23:55:16:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8480.74mb[0m
[34m[2020-07-02:23:55:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[23:55:16] S3DistributionType set as FullyReplicated[0m
[34m[23:55:16] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-07-02:23:55:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[23:55:16] S3DistributionType set as FullyReplicated[0m
[34m[23:55:16] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.5349#011validation-rmse:20.0095[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.0295#011validation-rmse:16.4723[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.1826#011validation-rmse:13.5878[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.9007#011validation-rmse:11.3021[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.10543#011validation-rmse:9.54216[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.67648#011validation-rmse:8.00256[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.49668#011validation-rmse:6.76869[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.54486#011validation-rmse:5.70259[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.82021#011validation-rmse:5.03261[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.24024#011validation-rmse:4.50189[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.73283#011validation-rmse:4.00402[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.40501#011validation-rmse:3.69053[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.11796#011validation-rmse:3.43768[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.9235#011validation-rmse:3.2496[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.6865#011validation-rmse:3.11507[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.54203#011validation-rmse:3.02304[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.42834#011validation-rmse:2.94678[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.32616#011validation-rmse:2.86833[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.2495#011validation-rmse:2.88175[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.16987#011validation-rmse:2.82182[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.0791#011validation-rmse:2.75277[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.97759#011validation-rmse:2.76833[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.92475#011validation-rmse:2.76183[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.83957#011validation-rmse:2.77321[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.81939#011validation-rmse:2.7597[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.80255#011validation-rmse:2.73751[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.77656#011validation-rmse:2.73705[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.73855#011validation-rmse:2.73453[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.6715#011validation-rmse:2.72806[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.62278#011validation-rmse:2.7062[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.58266#011validation-rmse:2.69513[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.53218#011validation-rmse:2.70662[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.45112#011validation-rmse:2.71283[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.43352#011validation-rmse:2.70041[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.41027#011validation-rmse:2.70625[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.3631#011validation-rmse:2.69217[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.33892#011validation-rmse:2.67485[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 6 pruned nodes, max_depth=2[0m
[34m[37]#011train-rmse:1.33429#011validation-rmse:2.67064[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.30198#011validation-rmse:2.68521[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.26927#011validation-rmse:2.66847[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.25382#011validation-rmse:2.6752[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.20534#011validation-rmse:2.68296[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.17756#011validation-rmse:2.68655[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[43]#011train-rmse:1.15384#011validation-rmse:2.70143[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[44]#011train-rmse:1.14285#011validation-rmse:2.70507[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.12554#011validation-rmse:2.71033[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[46]#011train-rmse:1.11146#011validation-rmse:2.69953[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[47]#011train-rmse:1.11138#011validation-rmse:2.69947[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.09107#011validation-rmse:2.69417[0m
[34m[23:55:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[49]#011train-rmse:1.08086#011validation-rmse:2.69133[0m
[34mStopping. Best iteration:[0m
[34m[39]#011train-rmse:1.26927#011validation-rmse:2.66847
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
............................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
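# As a rough numeric check (a hedged sketch, not part of the recorded output), we can also
# report the test RMSE between the predictions and the true median prices.
from sklearn.metrics import mean_squared_error
rmse = np.sqrt(mean_squared_error(Y_test, Y_pred))
print("Test RMSE: {:.3f}".format(rmse))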
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Machine Learning Engineer Nanodegree Program | Deployment_---As an introduction to SageMaker's low level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston.The reference documentation for the API used in this notebook can be found on the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) page. General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step needs to be done for every project. Also, many of the steps leave quite a lot of room for variation, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only cover steps 1 through 5, since we just want to get a feel for using SageMaker. In later notebooks we will discuss deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin with the setup needed to run the notebook. First, we load all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we also need to import the various SageMaker modules that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, we can retrieve the dataset using sklearn, so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataSince this is clean tabular data, we don't need to do any processing. However, we do need to split the rows of the dataset into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is created using SageMaker, a container is executed which performs the training operation. This container has access to data stored on S3, so we need to upload the data we want to use for training to S3. In addition, when we run a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this; it takes care of some of the details for us in the background. Save the data locallyFirst, we need to create the test, train and validation csv files, which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the default S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) so that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that the training and validation data have been uploaded to S3, we can create a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. We need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we have built the dictionary object containing the training job parameters, we can ask SageMaker to execute the training job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we have to wait for it to finish. We can do so by asking SageMaker to output the logs generated by the training job, and to keep doing so until the training job completes.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we can use the resulting model artifacts to build a model. Note that by model we mean SageMaker's definition of a model, that is, a collection of information about a specific algorithm together with the artifacts produced by its training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit the model to the training data, using the validation data to avoid overfitting, we can test our model. We will make use of SageMaker's batch transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way we constructed the training job earlier. Set up the batch transform jobJust as when training the model, we first need to provide some information in the form of a data structure that describes the batch transform job we wish to execute. We will only use some of the options available here; if you want to learn about the other options, see the SageMaker documentation on [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, we ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, and if we want to wait for the transform job to finish (and monitor its progress) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results have been stored on S3 as we requested. Since we'd like to analyze the output in the notebook, we will use a bit of notebook magic to copy the resulting output files from S3 to the local machine.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works, we can create a simple scatter plot of the predicted values against the true values. If the model's predictions were completely accurate, the scatter plot would be the straight line $x=y$. As we can see, our model performs reasonably well, but there is still room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
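# Optional follow-up (a hedged sketch, not recorded in this run): draw the x = y reference
# line and report the mean absolute error in $1000s to quantify the remaining error.
lims = [0, 55]  # assumed plotting range for Boston median prices, in $1000s
plt.plot(lims, lims, linestyle='--', color='gray')
mae = np.mean(np.abs(Y_test.values.ravel() - Y_pred.values.ravel()))
print("Test MAE: {:.3f}".format(mae))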
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have much spare disk space. As you keep completing and executing notebooks you will eventually fill up this disk space, leading to errors that can be difficult to diagnose. Once you are completely finished with a notebook, it is a good idea to remove the files that you created along the way. You can do this from the terminal or from the notebook hub. The cell below contains the commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
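# As the warning in this cell's output points out, a newer XGBoost image is available and can
# be requested with get_image_uri(region, 'xgboost', '0.90-1'); the default image is kept here.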
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-04-14 14:46:40 Starting - Launching requested ML instances......
2020-04-14 14:47:35 Starting - Preparing the instances for training......
2020-04-14 14:48:24 Downloading - Downloading input data...
2020-04-14 14:48:55 Training - Downloading the training image.[34mArguments: train[0m
[34m[2020-04-14:14:49:17:INFO] Running standalone xgboost training.[0m
[34m[2020-04-14:14:49:17:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8494.77mb[0m
[34m[2020-04-14:14:49:17:INFO] Determined delimiter of CSV input is ','[0m
[34m[14:49:17] S3DistributionType set as FullyReplicated[0m
[34m[14:49:17] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-04-14:14:49:17:INFO] Determined delimiter of CSV input is ','[0m
[34m[14:49:17] S3DistributionType set as FullyReplicated[0m
[34m[14:49:17] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.7609#011validation-rmse:19.1682[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:16.1967#011validation-rmse:15.6966[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.3648#011validation-rmse:12.9694[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:11.0289#011validation-rmse:10.8856[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.1454#011validation-rmse:9.19592[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.63525#011validation-rmse:7.96811[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.39922#011validation-rmse:6.85879[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.43053#011validation-rmse:5.98722[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.69399#011validation-rmse:5.45012[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.07708#011validation-rmse:4.97611[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.64117#011validation-rmse:4.61562[0m
[34m[11]#011train-rmse:3.28091#011validation-rmse:4.33273[0m
[34m[12]#011train-rmse:2.98585#011validation-rmse:4.1501[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.72889#011validation-rmse:3.97094[0m
[34m[14]#011train-rmse:2.54252#011validation-rmse:3.88057[0m
[34m[15]#011train-rmse:2.42174#011validation-rmse:3.82436[0m
[34m[16]#011train-rmse:2.31638#011validation-rmse:3.76621[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.20377#011validation-rmse:3.75277[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.15031#011validation-rmse:3.75847[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.0685#011validation-rmse:3.72955[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.00768#011validation-rmse:3.68507[0m
[34m[21]#011train-rmse:1.89877#011validation-rmse:3.67804[0m
[34m[22]#011train-rmse:1.86018#011validation-rmse:3.64269[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.81219#011validation-rmse:3.66234[0m
[34m[24]#011train-rmse:1.7391#011validation-rmse:3.60227[0m
[34m[25]#011train-rmse:1.66666#011validation-rmse:3.59597[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.61992#011validation-rmse:3.57984[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.57184#011validation-rmse:3.55381[0m
[34m[28]#011train-rmse:1.51017#011validation-rmse:3.59905[0m
[34m[29]#011train-rmse:1.48296#011validation-rmse:3.5877[0m
[34m[30]#011train-rmse:1.46414#011validation-rmse:3.57715[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.44535#011validation-rmse:3.58324[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.40263#011validation-rmse:3.57565[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.37779#011validation-rmse:3.58435[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.35549#011validation-rmse:3.56363[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.33581#011validation-rmse:3.55864[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.28596#011validation-rmse:3.53334[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[37]#011train-rmse:1.28605#011validation-rmse:3.52887[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.26977#011validation-rmse:3.53011[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.25247#011validation-rmse:3.53172[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 12 pruned nodes, max_depth=1[0m
[34m[40]#011train-rmse:1.25293#011validation-rmse:3.53788[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 22 pruned nodes, max_depth=4[0m
[34m[41]#011train-rmse:1.23408#011validation-rmse:3.52964[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[42]#011train-rmse:1.23402#011validation-rmse:3.52977[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.21243#011validation-rmse:3.51098[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[44]#011train-rmse:1.19662#011validation-rmse:3.5051[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.16845#011validation-rmse:3.51165[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 8 pruned nodes, max_depth=1[0m
[34m[46]#011train-rmse:1.16875#011validation-rmse:3.51747[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.14177#011validation-rmse:3.53711[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.10918#011validation-rmse:3.55157[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:1.09321#011validation-rmse:3.53864[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[50]#011train-rmse:1.09545#011validation-rmse:3.54497[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[51]#011train-rmse:1.09554#011validation-rmse:3.54481[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 24 pruned nodes, max_depth=2[0m
[34m[52]#011train-rmse:1.08445#011validation-rmse:3.55104[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[53]#011train-rmse:1.07551#011validation-rmse:3.54359[0m
[34m[14:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[54]#011train-rmse:1.05946#011validation-rmse:3.53222[0m
[34mStopping. Best iteration:[0m
[34m[44]#011train-rmse:1.19662#011validation-rmse:3.5051
[0m
###Markdown
Build the model. Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
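# A quick confirmation that the model was registered (a sketch; describe_model is
# part of the same boto3 SageMaker client used above and returns, among other
# details, the model's ARN):
print(session.sagemaker_client.describe_model(ModelName=model_name)['ModelArn'])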
###Output
_____no_output_____
###Markdown
Step 5: Testing the model. Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform job. Just like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
    # This sets an upper bound on how big each individual request sent to the model can be. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
    # this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
    # the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform job. Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
........................................!
###Markdown
Analyze the results. Now that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (37.1 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-656708836476/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
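# To attach a number to the scatter plot above, a minimal sketch computing the test
# RMSE. This assumes the rows of Y_pred come back in the same order as the rows of
# Y_test, which is how the batch transform output was written:
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.2f}".format(test_rmse))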
###Output
_____no_output_____
###Markdown
Optional: Clean up. The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)
_Deep Learning Nanodegree Program | Deployment_
As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass. The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/).
General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the data. Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the data. Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
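# A quick look at the resulting split sizes (a sketch; useful for confirming the
# 2/3 - 1/3 proportions described above):
print(X_train.shape, X_val.shape, X_test.shape)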
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3. When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locally. First we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon expect csv input without them. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3. Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
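# upload_data returns the S3 URI of each uploaded file; printing the URIs is a
# simple way to confirm where the csv files ended up (a sketch):
print(train_location, val_location, test_location, sep='\n')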
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost model. Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training job. First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training job. Now that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-02-01 03:35:14 Starting - Launching requested ML instances......
2020-02-01 03:36:12 Starting - Preparing the instances for training......
2020-02-01 03:37:13 Downloading - Downloading input data...
2020-02-01 03:37:46 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-02-01:03:38:07:INFO] Running standalone xgboost training.[0m
[34m[2020-02-01:03:38:07:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8504.19mb[0m
[34m[2020-02-01:03:38:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:38:07] S3DistributionType set as FullyReplicated[0m
[34m[03:38:07] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-02-01:03:38:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:38:07] S3DistributionType set as FullyReplicated[0m
[34m[03:38:07] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.1172#011validation-rmse:18.6556[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.6297#011validation-rmse:15.272[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.7775#011validation-rmse:12.51[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.5138#011validation-rmse:10.265[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:8.69958#011validation-rmse:8.48376[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.32432#011validation-rmse:7.09967[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.19629#011validation-rmse:6.12459[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.29551#011validation-rmse:5.24242[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.57674#011validation-rmse:4.61836[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.99336#011validation-rmse:4.22168[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.52629#011validation-rmse:3.9051[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.16166#011validation-rmse:3.65637[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.89004#011validation-rmse:3.48761[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.63432#011validation-rmse:3.41627[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.48286#011validation-rmse:3.36191[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.30543#011validation-rmse:3.2967[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.22745#011validation-rmse:3.27292[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.13328#011validation-rmse:3.1846[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.04643#011validation-rmse:3.19862[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.9664#011validation-rmse:3.2279[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.90068#011validation-rmse:3.23915[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.86218#011validation-rmse:3.22959[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.82202#011validation-rmse:3.24607[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.77011#011validation-rmse:3.22517[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.6661#011validation-rmse:3.22185[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.62356#011validation-rmse:3.22954[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.59638#011validation-rmse:3.2245[0m
[34m[03:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.54878#011validation-rmse:3.21692[0m
[34mStopping. Best iteration:[0m
[34m[17]#011train-rmse:2.13328#011validation-rmse:3.1846
[0m
2020-02-01 03:38:19 Uploading - Uploading generated training model
2020-02-01 03:38:19 Completed - Training job completed
Training seconds: 66
Billable seconds: 66
###Markdown
Build the model. Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the model. Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform job. Just like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
    # This sets an upper bound on how big each individual request sent to the model can be. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
    # this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
    # the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform job. Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
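# The final status can also be read directly from the job description (a sketch
# using the underlying boto3 client exposed by the session):
transform_status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']
print(transform_status)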
###Output
.............................................!
###Markdown
Analyze the results. Now that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (32.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-ap-northeast-2-148514131281/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean up. The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)
_Deep Learning Nanodegree Program | Deployment_
As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass. The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/).
General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the data. Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the data. Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3. When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locally. First we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon expect csv input without them. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
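# A quick check that the three csv files were written where we expect (a sketch):
print(sorted(os.listdir(data_dir)))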
###Output
_____no_output_____
###Markdown
Upload to S3. Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost model. Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training job. First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
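# Note: the warning emitted in this cell's output points out that the container tag
# requested above is not the newest one. Following the message's own example, a newer
# image could be pinned like this (a sketch, not executed here):
# container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')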
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training job. Now that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-08-29 08:34:50 Starting - Starting the training job...
2020-08-29 08:34:52 Starting - Launching requested ML instances......
2020-08-29 08:35:59 Starting - Preparing the instances for training........................
2020-08-29 08:40:09 Starting - Insufficient capacity error from EC2 while launching instances, retrying!.........
2020-08-29 08:41:41 Starting - Preparing the instances for training......
2020-08-29 08:42:39 Downloading - Downloading input data...
2020-08-29 08:43:07 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-08-29:08:43:28:INFO] Running standalone xgboost training.[0m
[34m[2020-08-29:08:43:28:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8486.28mb[0m
[34m[2020-08-29:08:43:28:INFO] Determined delimiter of CSV input is ','[0m
[34m[08:43:28] S3DistributionType set as FullyReplicated[0m
[34m[08:43:28] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-08-29:08:43:28:INFO] Determined delimiter of CSV input is ','[0m
[34m[08:43:28] S3DistributionType set as FullyReplicated[0m
[34m[08:43:28] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:18.7029#011validation-rmse:19.6951[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.275#011validation-rmse:16.4419[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:12.5144#011validation-rmse:13.7609[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.3218#011validation-rmse:11.6737[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.49589#011validation-rmse:9.90509[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.10867#011validation-rmse:8.61485[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.01999#011validation-rmse:7.61111[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.16222#011validation-rmse:6.82755[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.50016#011validation-rmse:6.28904[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.94681#011validation-rmse:5.83838[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.49877#011validation-rmse:5.46054[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.1799#011validation-rmse:5.1954[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.91891#011validation-rmse:4.9771[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.68051#011validation-rmse:4.80496[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.5175#011validation-rmse:4.6281[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.38476#011validation-rmse:4.58223[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.25601#011validation-rmse:4.48618[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.17092#011validation-rmse:4.43051[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.0746#011validation-rmse:4.42643[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.01785#011validation-rmse:4.37482[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.95759#011validation-rmse:4.31231[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.91203#011validation-rmse:4.24657[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.86587#011validation-rmse:4.18259[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.82478#011validation-rmse:4.16257[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.7847#011validation-rmse:4.10561[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.76423#011validation-rmse:4.13305[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.7219#011validation-rmse:4.07273[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.70112#011validation-rmse:4.03749[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.68454#011validation-rmse:4.06049[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.65073#011validation-rmse:4.02017[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.62359#011validation-rmse:4.02385[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.58176#011validation-rmse:4.04744[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.57692#011validation-rmse:4.05676[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.55374#011validation-rmse:4.08729[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[34]#011train-rmse:1.54066#011validation-rmse:4.0484[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.52079#011validation-rmse:4.01949[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.47634#011validation-rmse:4.00968[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.45368#011validation-rmse:3.9975[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.42378#011validation-rmse:3.99711[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.39891#011validation-rmse:4.00692[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.38219#011validation-rmse:4.01636[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.37383#011validation-rmse:4.02698[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 8 pruned nodes, max_depth=2[0m
[34m[42]#011train-rmse:1.36283#011validation-rmse:3.99468[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[43]#011train-rmse:1.35721#011validation-rmse:3.97173[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.34357#011validation-rmse:3.9473[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.33816#011validation-rmse:3.97051[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.32664#011validation-rmse:3.94102[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.30885#011validation-rmse:3.95296[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.29652#011validation-rmse:3.92841[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[49]#011train-rmse:1.27754#011validation-rmse:3.90936[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[50]#011train-rmse:1.26342#011validation-rmse:3.91605[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:1.25783#011validation-rmse:3.91868[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.2374#011validation-rmse:3.93203[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 12 pruned nodes, max_depth=1[0m
[34m[53]#011train-rmse:1.23427#011validation-rmse:3.91722[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 12 pruned nodes, max_depth=2[0m
[34m[54]#011train-rmse:1.23743#011validation-rmse:3.9357[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[55]#011train-rmse:1.21955#011validation-rmse:3.92035[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:1.18865#011validation-rmse:3.94761[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[57]#011train-rmse:1.17002#011validation-rmse:3.95552[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[58]#011train-rmse:1.13935#011validation-rmse:3.96458[0m
[34m[08:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[59]#011train-rmse:1.13655#011validation-rmse:3.951[0m
[34mStopping. Best iteration:[0m
[34m[49]#011train-rmse:1.27754#011validation-rmse:3.90936
[0m
2020-08-29 08:43:40 Uploading - Uploading generated training model
2020-08-29 08:43:40 Completed - Training job completed
Training seconds: 61
Billable seconds: 61
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
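###Markdown
As a quick sanity check, we can ask SageMaker to describe the model we just created and confirm that it points at the expected inference container and model artifacts. This is a minimal sketch using the standard DescribeModel call on the low-level client; `model_name` and `session` come from the cells above.
###Code
# Hedged example: inspect the model object we just registered with SageMaker.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['Image'])         # inference container
print(model_desc['PrimaryContainer']['ModelDataUrl'])  # S3 location of the artifacts
###Output
_____no_output_____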
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.......................................................!
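###Markdown
If we prefer not to block inside `wait_for_transform_job`, we can also poll the job ourselves. The sketch below uses the standard DescribeTransformJob call on the low-level client and only the names defined above; it is an illustration rather than part of the original workflow.
###Code
import time
# Hedged example: poll the batch transform job until it reaches a terminal state.
status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']
while status in ('InProgress', 'Stopping'):
    time.sleep(30)
    status = session.sagemaker_client.describe_transform_job(
        TransformJobName=transform_job_name)['TransformJobStatus']
print(status)  # should end as 'Completed', 'Failed' or 'Stopped'
###Output
_____no_output_____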
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
download: s3://sagemaker-eu-west-1-100264508876/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
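###Markdown
Besides the scatter plot, a single summary number is convenient. The sketch below computes the test RMSE from the same `Y_test` and `Y_pred` objects used above; nothing new is assumed beyond NumPy, which was imported earlier.
###Code
# Hedged example: root mean squared error of the batch transform predictions on the test set.
rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print('Test RMSE: {:.3f}'.format(rmse))
###Output
_____no_output_____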
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
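###Markdown
Both the region and the default S3 bucket of this session show up in the S3 paths later on, so it can be reassuring to print them now. This is a tiny sketch using only the `session` object created above.
###Code
# Hedged example: show the region and default bucket associated with the current session.
print(session.boto_region_name)
print(session.default_bucket())
###Output
_____no_output_____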
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
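###Markdown
A quick look at what `load_boston()` returned helps motivate the splitting step that follows: the data is a plain feature matrix plus a target vector. This is a minimal sketch that only inspects the object loaded above.
###Code
# Hedged example: inspect the shape of the feature matrix and the target vector.
print(boston.data.shape)    # (506, 13): 506 samples, 13 features
print(boston.target.shape)  # (506,): median home values
print(list(boston.feature_names))
###Output
_____no_output_____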
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon expect the data without them. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
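###Markdown
The upload calls above return the S3 URIs of the uploaded objects, and those URIs are exactly what we hand to the training and batch transform jobs later. A quick sketch to see them:
###Code
# Hedged example: the returned values are plain S3 URIs under the default bucket and chosen prefix.
print(train_location)
print(val_location)
print(test_location)
###Output
_____no_output_____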
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
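###Markdown
The warning above suggests pinning the XGBoost image version. A minimal sketch of how that would look, following the warning's own example (we keep the unpinned container for the rest of this run):
###Code
# Hedged example: request a specific XGBoost image version instead of the default,
# as suggested by the warning printed above.
pinned_container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1')
print(pinned_container)
###Output
_____no_output_____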
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
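###Markdown
Before attaching to the logs, we can check the job status directly with the low-level client. This is a small sketch using the standard DescribeTrainingJob call and the `training_job_name` defined above.
###Code
# Hedged example: query the current status of the training job we just created.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
print(job_desc['TrainingJobStatus'])  # e.g. 'InProgress' or 'Completed'
###Output
_____no_output_____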
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-05-01 10:47:14 Starting - Launching requested ML instances.........
2020-05-01 10:48:38 Starting - Preparing the instances for training......
2020-05-01 10:49:44 Downloading - Downloading input data
2020-05-01 10:49:44 Training - Downloading the training image...
2020-05-01 10:50:10 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2020-05-01:10:50:05:INFO] Running standalone xgboost training.[0m
[34m[2020-05-01:10:50:05:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8505.26mb[0m
[34m[2020-05-01:10:50:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[10:50:05] S3DistributionType set as FullyReplicated[0m
[34m[10:50:05] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-01:10:50:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[10:50:05] S3DistributionType set as FullyReplicated[0m
[34m[10:50:05] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.7521#011validation-rmse:20.1513[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:16.0784#011validation-rmse:16.5989[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.1605#011validation-rmse:13.7377[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.8641#011validation-rmse:11.4992[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:8.95659#011validation-rmse:9.63835[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.48741#011validation-rmse:8.20246[0m
[34m[10:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.34253#011validation-rmse:7.17045[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.35219#011validation-rmse:6.36555[0m
[34m[8]#011train-rmse:4.5632#011validation-rmse:5.71432[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.90932#011validation-rmse:5.26637[0m
[34m[10]#011train-rmse:3.40825#011validation-rmse:4.92068[0m
[34m[11]#011train-rmse:3.02481#011validation-rmse:4.73513[0m
[34m[12]#011train-rmse:2.72427#011validation-rmse:4.57922[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.52085#011validation-rmse:4.48979[0m
[34m[14]#011train-rmse:2.33247#011validation-rmse:4.38224[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.21231#011validation-rmse:4.32762[0m
[34m[16]#011train-rmse:2.09356#011validation-rmse:4.27357[0m
[34m[17]#011train-rmse:2.02211#011validation-rmse:4.22485[0m
[34m[18]#011train-rmse:1.95408#011validation-rmse:4.18295[0m
[34m[19]#011train-rmse:1.88318#011validation-rmse:4.15666[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.79872#011validation-rmse:4.16034[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.75617#011validation-rmse:4.149[0m
[34m[22]#011train-rmse:1.70418#011validation-rmse:4.11824[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.66811#011validation-rmse:4.14435[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.63033#011validation-rmse:4.15155[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.55656#011validation-rmse:4.14606[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.52191#011validation-rmse:4.1627[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.47521#011validation-rmse:4.15834[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.44399#011validation-rmse:4.15041[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.38052#011validation-rmse:4.1171[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.3269#011validation-rmse:4.1475[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.28994#011validation-rmse:4.1578[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.25981#011validation-rmse:4.1644[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.23546#011validation-rmse:4.16078[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.2145#011validation-rmse:4.15819[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.19571#011validation-rmse:4.14231[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.15502#011validation-rmse:4.12796[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.14195#011validation-rmse:4.12794[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.09873#011validation-rmse:4.12835[0m
[34m[10:50:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.0933#011validation-rmse:4.12415[0m
[34mStopping. Best iteration:[0m
[34m[29]#011train-rmse:1.38052#011validation-rmse:4.1171
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
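###Markdown
For context on the `MaxPayloadInMB` setting above, the test file we are sending is tiny, so a single 6 MB chunk is more than enough. The sketch below simply checks the size of the local copy using `os` and `data_dir`, both defined earlier.
###Code
# Hedged example: size of the local test data in megabytes, for comparison with MaxPayloadInMB.
size_mb = os.path.getsize(os.path.join(data_dir, 'test.csv')) / (1024 * 1024)
print('test.csv is {:.3f} MB'.format(size_mb))
###Output
_____no_output_____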
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (35.8 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-180564272071/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
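###Markdown
Local files are not the only things created along the way: the SageMaker model object registered earlier also remains in the account. If it is no longer needed, it can be removed with the standard DeleteModel call, as sketched below; this goes slightly beyond the clean-up described above, so only run it once you are completely done with the model.
###Code
# Hedged example: delete the SageMaker model object created earlier in this notebook.
session.sagemaker_client.delete_model(ModelName=model_name)
###Output
_____no_output_____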
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
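###Markdown
To confirm the 2/3 and 1/3 splits described above, we can print the resulting shapes. This is a small sketch that only uses the variables just created.
###Code
# Hedged example: verify the sizes of the train / validation / test splits.
print('train:', X_train.shape, 'validation:', X_val.shape, 'test:', X_test.shape)
###Output
_____no_output_____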
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon expect the data without them. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
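###Markdown
Since the built-in algorithm expects the target as the first column and no header row, a quick peek at the saved training file confirms the layout. This sketch uses only `pd`, `os` and `data_dir`, all defined above.
###Code
# Hedged example: the first column of train.csv should be the target (median value),
# followed by the 13 feature columns, with no header row.
pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None).head()
###Output
_____no_output_____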
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2019-04-15 11:25:42 Starting - Launching requested ML instances......
2019-04-15 11:26:45 Starting - Preparing the instances for training......
2019-04-15 11:27:51 Downloading - Downloading input data..
[31mArguments: train[0m
[31m[2019-04-15:11:28:23:INFO] Running standalone xgboost training.[0m
[31m[2019-04-15:11:28:23:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8410.32mb[0m
[31m[2019-04-15:11:28:23:INFO] Determined delimiter of CSV input is ','[0m
[31m[11:28:23] S3DistributionType set as FullyReplicated[0m
[31m[11:28:23] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2019-04-15:11:28:23:INFO] Determined delimiter of CSV input is ','[0m
[31m[11:28:23] S3DistributionType set as FullyReplicated[0m
[31m[11:28:23] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[0]#011train-rmse:19.9065#011validation-rmse:18.6901[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[1]#011train-rmse:16.2867#011validation-rmse:15.2844[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[2]#011train-rmse:13.3261#011validation-rmse:12.5001[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[3]#011train-rmse:10.9774#011validation-rmse:10.2619[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=4[0m
[31m[4]#011train-rmse:9.09364#011validation-rmse:8.75573[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[5]#011train-rmse:7.62318#011validation-rmse:7.50781[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:6.46277#011validation-rmse:6.59186[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.57428#011validation-rmse:5.93149[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.81609#011validation-rmse:5.27565[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:4.22447#011validation-rmse:4.88894[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.70744#011validation-rmse:4.52316[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:3.32707#011validation-rmse:4.27038[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:3.05772#011validation-rmse:4.17343[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.8421#011validation-rmse:4.12927[0m
[31m[14]#011train-rmse:2.6406#011validation-rmse:3.95668[0m
[31m[15]#011train-rmse:2.52359#011validation-rmse:3.8854[0m
[31m[16]#011train-rmse:2.41457#011validation-rmse:3.90974[0m
[31m[17]#011train-rmse:2.31033#011validation-rmse:3.83253[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[18]#011train-rmse:2.22727#011validation-rmse:3.76019[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19]#011train-rmse:2.10795#011validation-rmse:3.63786[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:2.05875#011validation-rmse:3.64712[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:2.00617#011validation-rmse:3.60796[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:1.98138#011validation-rmse:3.58907[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:1.90768#011validation-rmse:3.54222[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.85772#011validation-rmse:3.53911[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.83182#011validation-rmse:3.49351[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.77594#011validation-rmse:3.47132[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.69735#011validation-rmse:3.43927[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.66622#011validation-rmse:3.45545[0m
[31m[29]#011train-rmse:1.63549#011validation-rmse:3.43284[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.58835#011validation-rmse:3.43761[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.5632#011validation-rmse:3.427[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[32]#011train-rmse:1.53995#011validation-rmse:3.43876[0m
[31m[33]#011train-rmse:1.50094#011validation-rmse:3.45964[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[34]#011train-rmse:1.46876#011validation-rmse:3.44794[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[35]#011train-rmse:1.43595#011validation-rmse:3.45971[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=2[0m
[31m[36]#011train-rmse:1.40873#011validation-rmse:3.40671[0m
[31m[37]#011train-rmse:1.34522#011validation-rmse:3.39946[0m
[31m[38]#011train-rmse:1.29936#011validation-rmse:3.32717[0m
[31m[39]#011train-rmse:1.27493#011validation-rmse:3.34027[0m
[31m[40]#011train-rmse:1.26523#011validation-rmse:3.32152[0m
[31m[41]#011train-rmse:1.24979#011validation-rmse:3.32939[0m
[31m[42]#011train-rmse:1.24133#011validation-rmse:3.3182[0m
[31m[43]#011train-rmse:1.22602#011validation-rmse:3.30243[0m
[31m[44]#011train-rmse:1.19154#011validation-rmse:3.31035[0m
[31m[45]#011train-rmse:1.16447#011validation-rmse:3.26667[0m
[31m[46]#011train-rmse:1.14887#011validation-rmse:3.27873[0m
[31m[47]#011train-rmse:1.1278#011validation-rmse:3.27807[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 6 pruned nodes, max_depth=2[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=4[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=4[0m
[31m[48]#011train-rmse:1.11381#011validation-rmse:3.28562[0m
[31m[49]#011train-rmse:1.0787#011validation-rmse:3.25942[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[31m[50]#011train-rmse:1.06935#011validation-rmse:3.2378[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=4[0m
[31m[51]#011train-rmse:1.04705#011validation-rmse:3.22798[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 22 pruned nodes, max_depth=3[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=2[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=2[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[31m[52]#011train-rmse:1.02238#011validation-rmse:3.21924[0m
[31m[53]#011train-rmse:1.01524#011validation-rmse:3.21879[0m
[31m[54]#011train-rmse:1.00226#011validation-rmse:3.23297[0m
[31m[55]#011train-rmse:0.99439#011validation-rmse:3.24438[0m
[31m[56]#011train-rmse:0.979284#011validation-rmse:3.24914[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=4[0m
[31m[57]#011train-rmse:0.963044#011validation-rmse:3.23341[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[31m[58]#011train-rmse:0.947#011validation-rmse:3.24382[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[59]#011train-rmse:0.940707#011validation-rmse:3.24347[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[31m[60]#011train-rmse:0.94086#011validation-rmse:3.24351[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[31m[61]#011train-rmse:0.940686#011validation-rmse:3.24346[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[62]#011train-rmse:0.92834#011validation-rmse:3.23796[0m
[31m[11:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[31m[63]#011train-rmse:0.921661#011validation-rmse:3.22149[0m
[31mStopping. Best iteration:[0m
[31m[53]#011train-rmse:1.01524#011validation-rmse:3.21879
[0m
2019-04-15 11:28:34 Training - Training image download completed. Training in progress.
2019-04-15 11:28:34 Uploading - Uploading generated training model
2019-04-15 11:28:34 Completed - Training job completed
Billable seconds: 44
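###Markdown
Before building a model from the artifacts, it can be handy to pull out the final metrics that SageMaker recorded for the job. The cell below is a minimal optional sketch, assuming `session` and `training_job_name` are still defined as above; it reads the `FinalMetricDataList` field of the `describe_training_job` response, where the exact metric names (such as `train:rmse` and `validation:rmse`) depend on the algorithm.
###Code
# Optional sketch: print the final metric values SageMaker recorded for this training job.
# Assumes `session` and `training_job_name` are defined in the cells above.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
for metric in job_desc.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
###Output
_____no_output_____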
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
training_job_info['ModelArtifacts'].keys()
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
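###Markdown
As a quick sanity check (a minimal sketch, assuming `model_info` is the response returned by the `create_model` call above), the response contains the ARN of the newly registered model, which confirms that SageMaker now knows about it.
###Code
# Optional sketch: the create_model response includes the ARN of the new model.
print(model_info['ModelArn'])
###Output
_____no_output_____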
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..........................................!
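###Markdown
If we prefer to check on the job without streaming logs (a minimal sketch, assuming `session` and `transform_job_name` are defined as above), we can query its status directly; the `describe_transform_job` response includes a `TransformJobStatus` field.
###Code
# Optional sketch: query the batch transform job's current status directly.
job_desc = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
print(job_desc['TransformJobStatus'])
###Output
_____no_output_____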
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
download: s3://sagemaker-eu-west-1-345073139350/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
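###Markdown
To put a single number on the fit shown in the scatter plot, we can compute the root mean squared error between the actual and predicted prices. This is a minimal sketch that reuses `Y_test` and `Y_pred` from the cells above and assumes numpy was imported as `np` in the setup cell.
###Code
# Optional sketch: RMSE between actual and predicted median prices on the test set.
# Assumes numpy is available as `np` and that Y_test / Y_pred are defined above.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____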
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
rm: cannot remove ‘../data/boston/*’: No such file or directory
rmdir: failed to remove ‘../data/boston’: No such file or directory
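###Markdown
The errors above simply mean that the files and directory had already been removed. A slightly more defensive variant (a minimal sketch, using the `data_dir` variable defined earlier) checks that the directory still exists before removing it from Python rather than the shell.
###Code
# Optional sketch: remove the local data directory only if it still exists.
import os
import shutil
if os.path.exists(data_dir):
    shutil.rmtree(data_dir)
###Output
_____no_output_____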
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
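###Markdown
A quick sanity check on the split (a minimal sketch using the variables defined above): the three sets should together cover all rows of the dataset, roughly 45% train, 22% validation and 33% test.
###Code
# Optional sketch: verify the sizes of the train, validation and test splits.
print("train:", X_train.shape, " validation:", X_val.shape, " test:", X_test.shape)
###Output
_____no_output_____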
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
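###Markdown
The upload calls return the S3 URIs of the uploaded objects, so printing them (a minimal sketch using the variables defined above) is an easy way to confirm where the train, validation and test files ended up before pointing the training and transform jobs at them.
###Code
# Optional sketch: show where the data files were uploaded on S3.
print(train_location)
print(val_location)
print(test_location)
###Output
_____no_output_____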
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
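###Markdown
The warning above notes that `get_image_uri` is being deprecated and that a newer XGBoost image is available. Following its suggestion is optional for this walkthrough; the call would look like the sketch below, which simply sets `repo_version` as the warning recommends.
###Code
# Optional sketch, based on the deprecation warning above: request the 1.0-1 XGBoost
# container explicitly instead of the default repository version.
newer_container = get_image_uri(session.boto_region_name, 'xgboost', repo_version='1.0-1')
print(newer_container)
###Output
_____no_output_____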
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-08-16 17:24:31 Starting - Launching requested ML instances.........
2020-08-16 17:25:52 Starting - Preparing the instances for training......
2020-08-16 17:26:50 Downloading - Downloading input data
2020-08-16 17:26:50 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-08-16:17:27:11:INFO] Running standalone xgboost training.[0m
[34m[2020-08-16:17:27:11:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8485.08mb[0m
[34m[2020-08-16:17:27:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[17:27:11] S3DistributionType set as FullyReplicated[0m
[34m[17:27:11] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-08-16:17:27:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[17:27:11] S3DistributionType set as FullyReplicated[0m
[34m[17:27:11] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:20.2265#011validation-rmse:19.3455[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.4853#011validation-rmse:15.9476[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:13.4304#011validation-rmse:13.3301[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:11.0452#011validation-rmse:11.342[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.15469#011validation-rmse:9.86338[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[5]#011train-rmse:7.67557#011validation-rmse:8.69899[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.39794#011validation-rmse:7.72622[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.44119#011validation-rmse:7.18118[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.70627#011validation-rmse:6.74208[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.10401#011validation-rmse:6.37084[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.61132#011validation-rmse:6.12419[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.2893#011validation-rmse:5.97836[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.96522#011validation-rmse:5.79106[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.7268#011validation-rmse:5.65702[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.50642#011validation-rmse:5.47376[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.34554#011validation-rmse:5.41039[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.21067#011validation-rmse:5.38938[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.08006#011validation-rmse:5.44885[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.01389#011validation-rmse:5.38357[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.9154#011validation-rmse:5.28994[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.83877#011validation-rmse:5.21545[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.79577#011validation-rmse:5.20044[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.73453#011validation-rmse:5.2242[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.70339#011validation-rmse:5.18571[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.6488#011validation-rmse:5.16807[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.57292#011validation-rmse:5.12246[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.51913#011validation-rmse:5.12638[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.46548#011validation-rmse:5.15762[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.44039#011validation-rmse:5.12817[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.41186#011validation-rmse:5.11006[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.36725#011validation-rmse:5.11709[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.35027#011validation-rmse:5.12082[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.3314#011validation-rmse:5.11608[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.27882#011validation-rmse:5.07585[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.26015#011validation-rmse:5.09982[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.2427#011validation-rmse:5.08117[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.19904#011validation-rmse:5.10216[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.1737#011validation-rmse:5.1217[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.14549#011validation-rmse:5.1237[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.12599#011validation-rmse:5.11164[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[40]#011train-rmse:1.10173#011validation-rmse:5.09333[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[41]#011train-rmse:1.09267#011validation-rmse:5.07198[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.07007#011validation-rmse:5.07637[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.05392#011validation-rmse:5.07178[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.03749#011validation-rmse:5.03848[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[45]#011train-rmse:1.02603#011validation-rmse:5.01586[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.01061#011validation-rmse:4.97294[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[47]#011train-rmse:0.992629#011validation-rmse:4.96537[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:0.981203#011validation-rmse:4.96091[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[49]#011train-rmse:0.970397#011validation-rmse:4.95208[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[50]#011train-rmse:0.962033#011validation-rmse:4.9497[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:0.948746#011validation-rmse:4.95781[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 24 pruned nodes, max_depth=3[0m
[34m[52]#011train-rmse:0.943393#011validation-rmse:4.92282[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[53]#011train-rmse:0.93655#011validation-rmse:4.9068[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:0.920382#011validation-rmse:4.90073[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[55]#011train-rmse:0.909817#011validation-rmse:4.88985[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 24 pruned nodes, max_depth=4[0m
[34m[56]#011train-rmse:0.900043#011validation-rmse:4.88699[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:0.880627#011validation-rmse:4.88204[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[58]#011train-rmse:0.867286#011validation-rmse:4.87112[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[59]#011train-rmse:0.853921#011validation-rmse:4.87381[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[60]#011train-rmse:0.853825#011validation-rmse:4.87472[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[61]#011train-rmse:0.845434#011validation-rmse:4.8644[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[62]#011train-rmse:0.838463#011validation-rmse:4.86851[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[63]#011train-rmse:0.83856#011validation-rmse:4.86734[0m
[34m[64]#011train-rmse:0.827451#011validation-rmse:4.86207[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 22 pruned nodes, max_depth=4[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[65]#011train-rmse:0.82744#011validation-rmse:4.86217[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[66]#011train-rmse:0.820681#011validation-rmse:4.86553[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[67]#011train-rmse:0.810196#011validation-rmse:4.86026[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[68]#011train-rmse:0.807359#011validation-rmse:4.85853[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[69]#011train-rmse:0.807286#011validation-rmse:4.85968[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[70]#011train-rmse:0.807282#011validation-rmse:4.86012[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[34m[71]#011train-rmse:0.807352#011validation-rmse:4.86148[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[72]#011train-rmse:0.807282#011validation-rmse:4.85987[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[73]#011train-rmse:0.803372#011validation-rmse:4.85471[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[74]#011train-rmse:0.803403#011validation-rmse:4.85417[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[75]#011train-rmse:0.803381#011validation-rmse:4.85452[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[76]#011train-rmse:0.803367#011validation-rmse:4.85486[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[77]#011train-rmse:0.803367#011validation-rmse:4.85561[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[78]#011train-rmse:0.803436#011validation-rmse:4.85671[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[34m[79]#011train-rmse:0.803489#011validation-rmse:4.85718[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[80]#011train-rmse:0.803413#011validation-rmse:4.85646[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[81]#011train-rmse:0.80339#011validation-rmse:4.85614[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[82]#011train-rmse:0.803387#011validation-rmse:4.8544[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[83]#011train-rmse:0.803425#011validation-rmse:4.8539[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[84]#011train-rmse:0.803369#011validation-rmse:4.8548[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[85]#011train-rmse:0.803426#011validation-rmse:4.85389[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[86]#011train-rmse:0.803385#011validation-rmse:4.85444[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[87]#011train-rmse:0.803385#011validation-rmse:4.85444[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[88]#011train-rmse:0.803434#011validation-rmse:4.85381[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[89]#011train-rmse:0.795356#011validation-rmse:4.86775[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[90]#011train-rmse:0.795354#011validation-rmse:4.8676[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[91]#011train-rmse:0.795366#011validation-rmse:4.86682[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 24 pruned nodes, max_depth=3[0m
[34m[92]#011train-rmse:0.787607#011validation-rmse:4.88536[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[93]#011train-rmse:0.784493#011validation-rmse:4.8835[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[94]#011train-rmse:0.784439#011validation-rmse:4.8838[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[95]#011train-rmse:0.784262#011validation-rmse:4.88556[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[96]#011train-rmse:0.784259#011validation-rmse:4.88566[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[97]#011train-rmse:0.784257#011validation-rmse:4.88571[0m
[34m[17:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[98]#011train-rmse:0.784257#011validation-rmse:4.88631[0m
[34mStopping. Best iteration:[0m
[34m[88]#011train-rmse:0.803434#011validation-rmse:4.85381
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2018-12-05 05:31:40 Starting - Starting the training job...
2018-12-05 05:32:05 Starting - Launching requested ML instances......
2018-12-05 05:33:03 Starting - Preparing the instances for training.........
2018-12-05 05:34:41 Downloading - Downloading input data
2018-12-05 05:34:41 Training - Downloading the training image.
[31mArguments: train[0m
[31m[2018-12-05:05:34:45:INFO] Running standalone xgboost training.[0m
[31m[2018-12-05:05:34:45:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8388.47mb[0m
[31m[2018-12-05:05:34:45:INFO] Determined delimiter of CSV input is ','[0m
[31m[05:34:45] S3DistributionType set as FullyReplicated[0m
[31m[05:34:45] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2018-12-05:05:34:45:INFO] Determined delimiter of CSV input is ','[0m
[31m[05:34:45] S3DistributionType set as FullyReplicated[0m
[31m[05:34:45] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[0]#011train-rmse:18.9172#011validation-rmse:21.5178[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[1]#011train-rmse:15.4002#011validation-rmse:17.6521[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[2]#011train-rmse:12.6235#011validation-rmse:14.7216[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[3]#011train-rmse:10.3477#011validation-rmse:12.3026[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=4[0m
[31m[4]#011train-rmse:8.58284#011validation-rmse:10.457[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[5]#011train-rmse:7.17488#011validation-rmse:8.99078[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:5.97956#011validation-rmse:7.86737[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.05166#011validation-rmse:7.03463[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.33317#011validation-rmse:6.40078[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:3.75081#011validation-rmse:5.85356[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.28174#011validation-rmse:5.43365[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:2.91581#011validation-rmse:5.16029[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:2.64005#011validation-rmse:5.00644[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.3951#011validation-rmse:4.90395[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[14]#011train-rmse:2.22468#011validation-rmse:4.81022[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[15]#011train-rmse:2.05786#011validation-rmse:4.70601[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[16]#011train-rmse:1.93112#011validation-rmse:4.61248[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[17]#011train-rmse:1.84605#011validation-rmse:4.58024[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[18]#011train-rmse:1.79227#011validation-rmse:4.53928[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[19]#011train-rmse:1.73342#011validation-rmse:4.52377[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:1.66078#011validation-rmse:4.54059[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:1.63031#011validation-rmse:4.55337[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:1.59953#011validation-rmse:4.52276[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:1.55256#011validation-rmse:4.51886[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.49168#011validation-rmse:4.48123[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.44604#011validation-rmse:4.4447[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.40946#011validation-rmse:4.43294[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.35453#011validation-rmse:4.42985[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.31068#011validation-rmse:4.45228[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[29]#011train-rmse:1.27766#011validation-rmse:4.42568[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.22776#011validation-rmse:4.43719[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.20054#011validation-rmse:4.41115[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[31m[32]#011train-rmse:1.19482#011validation-rmse:4.40869[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[33]#011train-rmse:1.16382#011validation-rmse:4.4266[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[34]#011train-rmse:1.13072#011validation-rmse:4.42917[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[35]#011train-rmse:1.11744#011validation-rmse:4.43787[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[36]#011train-rmse:1.10004#011validation-rmse:4.41296[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[31m[37]#011train-rmse:1.09144#011validation-rmse:4.42654[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[38]#011train-rmse:1.08041#011validation-rmse:4.41043[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[39]#011train-rmse:1.04299#011validation-rmse:4.4048[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[40]#011train-rmse:1.02203#011validation-rmse:4.39526[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[41]#011train-rmse:1.00905#011validation-rmse:4.38558[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[31m[42]#011train-rmse:0.997572#011validation-rmse:4.36736[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[43]#011train-rmse:0.979626#011validation-rmse:4.34729[0m
[31m[44]#011train-rmse:0.972102#011validation-rmse:4.33716[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[31m[45]#011train-rmse:0.950873#011validation-rmse:4.31641[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[31m[46]#011train-rmse:0.934356#011validation-rmse:4.33178[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[31m[47]#011train-rmse:0.934352#011validation-rmse:4.33296[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 18 pruned nodes, max_depth=4[0m
[31m[48]#011train-rmse:0.92892#011validation-rmse:4.30632[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[31m[49]#011train-rmse:0.917794#011validation-rmse:4.29065[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5[0m
[31m[50]#011train-rmse:0.90184#011validation-rmse:4.293[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 26 pruned nodes, max_depth=3[0m
[31m[51]#011train-rmse:0.892298#011validation-rmse:4.29791[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[52]#011train-rmse:0.880179#011validation-rmse:4.28427[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[31m[53]#011train-rmse:0.875295#011validation-rmse:4.29341[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[31m[54]#011train-rmse:0.875366#011validation-rmse:4.29406[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[31m[55]#011train-rmse:0.875441#011validation-rmse:4.2946[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=4[0m
[31m[56]#011train-rmse:0.867401#011validation-rmse:4.29797[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[31m[57]#011train-rmse:0.867235#011validation-rmse:4.29632[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[31m[58]#011train-rmse:0.867207#011validation-rmse:4.29499[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 32 pruned nodes, max_depth=2[0m
[31m[59]#011train-rmse:0.86211#011validation-rmse:4.29494[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 20 pruned nodes, max_depth=3[0m
[31m[60]#011train-rmse:0.85272#011validation-rmse:4.33103[0m
[31m[61]#011train-rmse:0.842024#011validation-rmse:4.33756[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[31m[05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[31m[62]#011train-rmse:0.842114#011validation-rmse:4.33826[0m
[31mStopping. Best iteration:[0m
[31m[52]#011train-rmse:0.880179#011validation-rmse:4.28427
[0m
2018-12-05 05:34:51 Uploading - Uploading generated training model
2018-12-05 05:34:51 Completed - Training job completed
Billable seconds: 17
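###Markdown
If we would rather not stream the logs, a minimal sketch for simply polling the job status (assuming the `training_job_name` defined above) is shown below. The loop checks the status every thirty seconds and stops once the job reaches a terminal state.
###Code
# Poll the training job status instead of streaming the logs. This is only a
# sketch; it assumes training_job_name refers to the job we just created.
while True:
    status = session.sagemaker_client.describe_training_job(
        TrainingJobName=training_job_name)['TrainingJobStatus']
    print(status)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(30)
###Output
_____no_output_____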
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
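###Markdown
As suggested in the comment above, the description returned by `describe_training_job` is just a dictionary, so a quick and purely illustrative way to see what it contains is to print its keys and a field or two. The exact fields available depend on the job.
###Code
# Peek at the training job description we retrieved above. This is only an
# illustrative sketch; the available fields depend on the job.
print(sorted(training_job_info.keys()))
print(training_job_info['TrainingJobStatus'])
print(model_artifacts)
###Output
_____no_output_____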
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
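###Markdown
Before submitting the job it can be handy to sanity check the request. Since it is a plain dictionary, a small sketch like the one below simply pretty-prints it for review.
###Code
# Pretty-print the batch transform request so we can review it before sending it off.
import json
print(json.dumps(transform_request, indent=2))
###Output
_____no_output_____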
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait until the transform job has finished (and make sure it is progressing along the way) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.......................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
download: s3://sagemaker-ap-south-1-651711011978/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
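###Markdown
To complement the scatter plot with a single number, a quick sketch of the test set RMSE (assuming the `Y_test` and `Y_pred` dataframes defined above) is shown below.
###Code
# Root mean squared error on the test set. Both Y_test and Y_pred are single
# column dataframes, so we compare their underlying arrays elementwise.
rmse = np.sqrt(np.mean((Y_test.values - Y_pred.values) ** 2))
print("Test RMSE: {:.3f}".format(rmse))
###Output
_____no_output_____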
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-06-22 11:27:13 Starting - Launching requested ML instances.........
2020-06-22 11:28:20 Starting - Preparing the instances for training......
2020-06-22 11:29:16 Downloading - Downloading input data...
2020-06-22 11:30:09 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2020-06-22:11:30:10:INFO] Running standalone xgboost training.[0m
[34m[2020-06-22:11:30:10:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8502.01mb[0m
[34m[2020-06-22:11:30:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:30:10] S3DistributionType set as FullyReplicated[0m
[34m[11:30:10] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-06-22:11:30:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:30:10] S3DistributionType set as FullyReplicated[0m
[34m[11:30:10] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:18.981#011validation-rmse:19.2766[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.556#011validation-rmse:15.7182[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.8265#011validation-rmse:12.915[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.6264#011validation-rmse:10.5855[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.83662#011validation-rmse:8.747[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.42514#011validation-rmse:7.30804[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.34168#011validation-rmse:6.33101[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.45181#011validation-rmse:5.63067[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.74557#011validation-rmse:5.04[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.1905#011validation-rmse:4.56195[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.74638#011validation-rmse:4.26149[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.42449#011validation-rmse:4.01691[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.1267#011validation-rmse:3.83082[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.89156#011validation-rmse:3.68527[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.67944#011validation-rmse:3.58707[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.55994#011validation-rmse:3.51551[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.46371#011validation-rmse:3.48704[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.34792#011validation-rmse:3.46315[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.24066#011validation-rmse:3.48511[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.1789#011validation-rmse:3.4407[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.08505#011validation-rmse:3.42545[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:2.02886#011validation-rmse:3.43828[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.95693#011validation-rmse:3.43134[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.89126#011validation-rmse:3.38539[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.84762#011validation-rmse:3.36507[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.81557#011validation-rmse:3.35717[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.78268#011validation-rmse:3.3355[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.73615#011validation-rmse:3.3308[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.69443#011validation-rmse:3.32833[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.65845#011validation-rmse:3.30043[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.6154#011validation-rmse:3.3089[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.56617#011validation-rmse:3.32911[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.51735#011validation-rmse:3.31193[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.47505#011validation-rmse:3.28774[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.45634#011validation-rmse:3.28651[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.42334#011validation-rmse:3.27541[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.3947#011validation-rmse:3.26806[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.38629#011validation-rmse:3.26626[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.38064#011validation-rmse:3.26908[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.36151#011validation-rmse:3.27405[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.32682#011validation-rmse:3.29127[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.28461#011validation-rmse:3.2774[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[42]#011train-rmse:1.27727#011validation-rmse:3.27474[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.26151#011validation-rmse:3.24906[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.2264#011validation-rmse:3.21744[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.19708#011validation-rmse:3.21958[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[46]#011train-rmse:1.1728#011validation-rmse:3.20443[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.14306#011validation-rmse:3.21022[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 8 pruned nodes, max_depth=2[0m
[34m[48]#011train-rmse:1.13807#011validation-rmse:3.20272[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[49]#011train-rmse:1.13805#011validation-rmse:3.20289[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[50]#011train-rmse:1.11874#011validation-rmse:3.20855[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[51]#011train-rmse:1.11202#011validation-rmse:3.21129[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.10509#011validation-rmse:3.21354[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[53]#011train-rmse:1.09056#011validation-rmse:3.21037[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:1.07007#011validation-rmse:3.20592[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[55]#011train-rmse:1.05963#011validation-rmse:3.1906[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:1.05139#011validation-rmse:3.20328[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:1.03617#011validation-rmse:3.18722[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[58]#011train-rmse:1.02599#011validation-rmse:3.19119[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[59]#011train-rmse:1.0207#011validation-rmse:3.18222[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[60]#011train-rmse:0.999316#011validation-rmse:3.17506[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[61]#011train-rmse:0.982697#011validation-rmse:3.18895[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 24 pruned nodes, max_depth=2[0m
[34m[62]#011train-rmse:0.980357#011validation-rmse:3.19[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[63]#011train-rmse:0.961294#011validation-rmse:3.18641[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[64]#011train-rmse:0.946977#011validation-rmse:3.17243[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[65]#011train-rmse:0.94691#011validation-rmse:3.17263[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[66]#011train-rmse:0.927537#011validation-rmse:3.16717[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[67]#011train-rmse:0.91847#011validation-rmse:3.15783[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[68]#011train-rmse:0.909934#011validation-rmse:3.15312[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[69]#011train-rmse:0.897996#011validation-rmse:3.16648[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[70]#011train-rmse:0.89797#011validation-rmse:3.16684[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[71]#011train-rmse:0.897982#011validation-rmse:3.16663[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[72]#011train-rmse:0.882783#011validation-rmse:3.15461[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[73]#011train-rmse:0.877671#011validation-rmse:3.15547[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[74]#011train-rmse:0.865433#011validation-rmse:3.14371[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[75]#011train-rmse:0.86542#011validation-rmse:3.1428[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[76]#011train-rmse:0.865411#011validation-rmse:3.1429[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[77]#011train-rmse:0.849927#011validation-rmse:3.14144[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[78]#011train-rmse:0.839832#011validation-rmse:3.14751[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[79]#011train-rmse:0.839929#011validation-rmse:3.14686[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[80]#011train-rmse:0.827212#011validation-rmse:3.15361[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[81]#011train-rmse:0.8272#011validation-rmse:3.15367[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[82]#011train-rmse:0.822933#011validation-rmse:3.15417[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[34m[83]#011train-rmse:0.823084#011validation-rmse:3.15369[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[84]#011train-rmse:0.823051#011validation-rmse:3.15377[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[85]#011train-rmse:0.818507#011validation-rmse:3.1427[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[34m[86]#011train-rmse:0.818446#011validation-rmse:3.14286[0m
[34m[11:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[87]#011train-rmse:0.81833#011validation-rmse:3.14334[0m
[34mStopping. Best iteration:[0m
[34m[77]#011train-rmse:0.849927#011validation-rmse:3.14144
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...................................................!
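###Markdown
If we preferred not to block inside `wait_for_transform_job`, one alternative is to poll the job status ourselves with the boto3 client's `describe_transform_job` call. The sketch below is only an illustration and assumes the `session` and `transform_job_name` variables defined above.
###Code
import time

# Sketch: poll the batch transform job until it reaches a terminal state,
# instead of blocking in wait_for_transform_job.
# Assumes `session` and `transform_job_name` from the cells above.
while True:
    status = session.sagemaker_client.describe_transform_job(
        TransformJobName=transform_job_name)['TransformJobStatus']
    print(status)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(30)
###Output
_____no_output_____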
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (35.1 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-eu-central-1-245452871727/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
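###Markdown
The scatter plot only gives a qualitative impression, so it can help to attach a number to "room for improvement". The sketch below computes the test RMSE from the `Y_test` and `Y_pred` frames loaded above; it is just a quick check, not part of the original workflow.
###Code
import numpy as np

# Sketch: quantify the fit with the root mean squared error on the test set.
# Assumes `Y_test` and `Y_pred` from the cell above.
rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(rmse))
###Output
_____no_output_____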
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
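###Markdown
If we would rather avoid shell commands, a Python-only alternative is to remove the directory tree with `shutil`; the sketch below assumes `data_dir` still points at the local data directory created earlier.
###Code
import os
import shutil

# Sketch: Python-only cleanup of the local data directory.
# Assumes `data_dir` from the cells above.
if os.path.exists(data_dir):
    shutil.rmtree(data_dir)
###Output
_____no_output_____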
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 13.8 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.37)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.1.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.7)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3)
Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (1.25.11)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=357553d9e0f555e4251950085cdc200308d9af0bcc2ae7fb3ec686d6a8c863df
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.0
Uninstalling smdebug-rulesconfig-1.0.0:
Successfully uninstalled smdebug-rulesconfig-1.0.0
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.19.0
Uninstalling sagemaker-2.19.0:
Successfully uninstalled sagemaker-2.19.0
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
[33mWARNING: You are using pip version 20.3; however, version 20.3.3 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, as this is the format required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
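###Markdown
Because the built-in algorithm expects no header row and the target in the first column, it can be worth reading one of the saved files back to confirm the layout. This is only a quick sketch against the `train.csv` written above; the expected shape assumes the 13 Boston features plus the target column.
###Code
import os
import pandas as pd

# Sketch: read the saved training file back (no header) and confirm that the
# first column holds the target followed by the 13 feature columns.
# Assumes `data_dir` from the cell above.
check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)
print(check.shape)  # expected: (number of training rows, 14)
print(check.head(2))
###Output
_____no_output_____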
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
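###Markdown
To double-check that the three files actually landed under our prefix in the default bucket, we can list the keys with boto3. This is a sketch only; it assumes the standard boto3 S3 client is available in the notebook environment and uses the `session` and `prefix` variables from above.
###Code
import boto3

# Sketch: list the objects uploaded under our prefix in the default bucket.
# Assumes `session` and `prefix` from the cells above.
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'])
###Output
_____no_output_____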
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
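###Markdown
The warning above mentions that a newer XGBoost image is available. We keep the original image in this notebook, but if you wanted to try the newer one, the call the warning suggests looks roughly like the sketch below; results and supported hyperparameters may differ slightly between container versions.
###Code
# Sketch: request the newer XGBoost image that the warning above recommends.
# Not used in the rest of this notebook.
newer_container = get_image_uri(session.boto_region_name, 'xgboost', repo_version='1.0-1')
print(newer_container)
###Output
_____no_output_____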
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-01-14 19:31:13 Starting - Launching requested ML instances......
2021-01-14 19:32:19 Starting - Preparing the instances for training......
2021-01-14 19:33:28 Downloading - Downloading input data
2021-01-14 19:33:28 Training - Downloading the training image..[34mArguments: train[0m
[34m[2021-01-14:19:33:48:INFO] Running standalone xgboost training.[0m
[34m[2021-01-14:19:33:48:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8439.36mb[0m
[34m[2021-01-14:19:33:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[19:33:48] S3DistributionType set as FullyReplicated[0m
[34m[19:33:48] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-01-14:19:33:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[19:33:48] S3DistributionType set as FullyReplicated[0m
[34m[19:33:48] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:20.1923#011validation-rmse:19.9594[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.4654#011validation-rmse:16.168[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.5149#011validation-rmse:13.2798[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:11.1294#011validation-rmse:11.0008[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.20893#011validation-rmse:9.32069[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.66008#011validation-rmse:7.85713[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.44634#011validation-rmse:6.88376[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.51856#011validation-rmse:6.10812[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.73323#011validation-rmse:5.55834[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.09218#011validation-rmse:5.0439[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.60244#011validation-rmse:4.74105[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.28628#011validation-rmse:4.51091[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.94207#011validation-rmse:4.33658[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.71207#011validation-rmse:4.20595[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.473#011validation-rmse:4.09117[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.3303#011validation-rmse:4.03929[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.21599#011validation-rmse:3.98067[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.10607#011validation-rmse:3.94202[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.02197#011validation-rmse:3.90705[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.91127#011validation-rmse:3.85877[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.84211#011validation-rmse:3.8278[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.78758#011validation-rmse:3.80388[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.75182#011validation-rmse:3.80522[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.70056#011validation-rmse:3.77427[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.65685#011validation-rmse:3.78449[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.63166#011validation-rmse:3.79138[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.61294#011validation-rmse:3.79605[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.56947#011validation-rmse:3.7823[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.53615#011validation-rmse:3.78774[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.51971#011validation-rmse:3.79683[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.48306#011validation-rmse:3.77044[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.44567#011validation-rmse:3.76663[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.41149#011validation-rmse:3.78141[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.38407#011validation-rmse:3.75235[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.3448#011validation-rmse:3.72846[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[35]#011train-rmse:1.33449#011validation-rmse:3.7332[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.32233#011validation-rmse:3.73291[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.30324#011validation-rmse:3.73753[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.2392#011validation-rmse:3.74427[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.22917#011validation-rmse:3.74493[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.20426#011validation-rmse:3.72517[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.18228#011validation-rmse:3.73118[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 8 pruned nodes, max_depth=2[0m
[34m[42]#011train-rmse:1.17046#011validation-rmse:3.73331[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.15856#011validation-rmse:3.71751[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[44]#011train-rmse:1.12368#011validation-rmse:3.71129[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.10405#011validation-rmse:3.71758[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.08336#011validation-rmse:3.71286[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.07172#011validation-rmse:3.72461[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[48]#011train-rmse:1.04548#011validation-rmse:3.72036[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[49]#011train-rmse:1.04546#011validation-rmse:3.7201[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[50]#011train-rmse:1.03114#011validation-rmse:3.72968[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[51]#011train-rmse:1.02411#011validation-rmse:3.73602[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.01006#011validation-rmse:3.74152[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[53]#011train-rmse:0.997358#011validation-rmse:3.74899[0m
[34m[19:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[54]#011train-rmse:0.99677#011validation-rmse:3.74091[0m
[34mStopping. Best iteration:[0m
[34m[44]#011train-rmse:1.12368#011validation-rmse:3.71129
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (24.8 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-eu-central-1-941012658317/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Using cached sagemaker-1.72.0-py2.py3-none-any.whl
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (21.3)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.17.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Collecting smdebug-rulesconfig==0.1.4
Using cached smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.20.25)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (4.5.0)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.5.0)
Requirement already satisfied: botocore<1.24.0,>=1.23.25 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.23.25)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.10.0.0)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.16.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (1.26.5)
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.72.1
Uninstalling sagemaker-2.72.1:
Successfully uninstalled sagemaker-2.72.1
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
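###Markdown
Before moving on, it can be helpful to confirm what the session object resolved for us, since later cells rely on the region and the default bucket. The sketch below simply prints these values; it assumes the `session` and `role` variables from the cell above.
###Code
# Sketch: inspect the session details that later cells depend on.
# Assumes `session` and `role` from the cell above.
print(session.boto_region_name)
print(session.default_bucket())
print(role)
###Output
_____no_output_____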
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
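###Markdown
As a quick check on the split proportions described above, we can print the shapes of the resulting sets; this is just a sketch using the variables created in the previous cell.
###Code
# Sketch: confirm the sizes of the train / validation / test splits.
# Assumes the split variables from the cell above.
print("train:     ", X_train.shape)
print("validation:", X_val.shape)
print("test:      ", X_test.shape)
###Output
_____no_output_____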
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, as this is the format required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
print(training_params)
###Output
{'RoleArn': 'arn:aws:iam::963845225402:role/service-role/AmazonSageMaker-ExecutionRole-20220123T182626', 'AlgorithmSpecification': {'TrainingImage': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:1', 'TrainingInputMode': 'File'}, 'OutputDataConfig': {'S3OutputPath': 's3://sagemaker-us-east-1-963845225402/boston-xgboost-LL/output'}, 'ResourceConfig': {'InstanceCount': 1, 'InstanceType': 'ml.m4.xlarge', 'VolumeSizeInGB': 5}, 'StoppingCondition': {'MaxRuntimeInSeconds': 86400}, 'HyperParameters': {'max_depth': '5', 'eta': '0.2', 'gamma': '4', 'min_child_weight': '6', 'subsample': '0.8', 'objective': 'reg:linear', 'early_stopping_rounds': '10', 'num_round': '200'}, 'InputDataConfig': [{'ChannelName': 'train', 'DataSource': {'S3DataSource': {'S3DataType': 'S3Prefix', 'S3Uri': 's3://sagemaker-us-east-1-963845225402/boston-xgboost-LL/train.csv', 'S3DataDistributionType': 'FullyReplicated'}}, 'ContentType': 'csv', 'CompressionType': 'None'}, {'ChannelName': 'validation', 'DataSource': {'S3DataSource': {'S3DataType': 'S3Prefix', 'S3Uri': 's3://sagemaker-us-east-1-963845225402/boston-xgboost-LL/validation.csv', 'S3DataDistributionType': 'FullyReplicated'}}, 'ContentType': 'csv', 'CompressionType': 'None'}], 'TrainingJobName': 'boston-xgboost-2022-01-29-19-51-35'}
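###Markdown
Instead of scanning the streamed logs for the best validation RMSE, we can also read the final metrics from the training job description. This is a sketch that assumes the DescribeTrainingJob response includes a populated `FinalMetricDataList`, which is typically the case for the built-in XGBoost container.
###Code
# Sketch: read the final metrics recorded for the training job.
# Assumes `session` and `training_job_name` from the cells above; the
# FinalMetricDataList field is assumed to be populated for this container.
info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
for metric in info.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
###Output
_____no_output_____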
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
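# Hedged note (added): for CSV input, batch transform writes its predictions to the output
# prefix using the input object's name with a '.out' suffix appended, which is why the
# analysis step later in this notebook reads 'test.csv.out'.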
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
print(transform_request)
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (33.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-963845225402/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
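# Hedged note (added, not part of the original notebook): 'reg:linear' is XGBoost's
# squared-error regression objective; newer XGBoost releases rename it to 'reg:squarederror'
# and only emit a warning for the old alias, so it still works with the container used here.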
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
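# Optional, hedged aside (not in the original notebook): create_training_job returns
# immediately, so instead of streaming logs (as the next cell does) we could also poll the
# job ourselves. 'TrainingJobStatus' is a field of the DescribeTrainingJob response and takes
# values such as 'InProgress', 'Completed' or 'Failed'.
status = session.sagemaker_client.describe_training_job(
    TrainingJobName=training_job_name)['TrainingJobStatus']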
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
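# Optional, hedged aside (not in the original notebook): wait_for_transform_job already
# blocks until the job finishes, but if we preferred to poll manually we could query the job
# status directly; 'TransformJobStatus' is a field of the DescribeTransformJob response.
transform_status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']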
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
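# Optional, hedged check (not part of the original notebook): confirm that features and
# targets stay aligned and that the three splits cover the whole dataset. With the 506-row
# Boston dataset this yields roughly 227 / 112 / 167 rows for train / validation / test,
# matching the matrix sizes reported in the training logs.
assert len(X_train) == len(Y_train) and len(X_val) == len(Y_val) and len(X_test) == len(Y_test)
assert len(X_train) + len(X_val) + len(X_test) == len(X_bos_pd)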
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
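# Optional, hedged check (not in the original notebook): upload_data returns the S3 URI of
# each uploaded object, so we can confirm the files landed under the expected prefix.
assert train_location.startswith('s3://') and prefix in train_location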
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2018-12-24 10:25:44 Starting - Launching requested ML instances.........
2018-12-24 10:26:45 Starting - Preparing the instances for training...
2018-12-24 10:27:42 Downloading - Downloading input data...
2018-12-24 10:28:12 Training - Training image download completed. Training in progress.
2018-12-24 10:28:12 Uploading - Uploading generated training model.
[31mArguments: train[0m
[31m[2018-12-24:10:28:10:INFO] Running standalone xgboost training.[0m
[31m[2018-12-24:10:28:10:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8370.41mb[0m
[31m[2018-12-24:10:28:10:INFO] Determined delimiter of CSV input is ','[0m
[31m[10:28:10] S3DistributionType set as FullyReplicated[0m
[31m[10:28:10] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2018-12-24:10:28:10:INFO] Determined delimiter of CSV input is ','[0m
[31m[10:28:10] S3DistributionType set as FullyReplicated[0m
[31m[10:28:10] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[0]#011train-rmse:20.0873#011validation-rmse:19.187[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[1]#011train-rmse:16.3862#011validation-rmse:15.6498[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[2]#011train-rmse:13.514#011validation-rmse:12.9302[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[3]#011train-rmse:11.1229#011validation-rmse:10.7689[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=4[0m
[31m[4]#011train-rmse:9.28682#011validation-rmse:9.12407[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[5]#011train-rmse:7.71841#011validation-rmse:7.69864[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:6.47755#011validation-rmse:6.72882[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.54573#011validation-rmse:6.0006[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.79556#011validation-rmse:5.40596[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:4.23815#011validation-rmse:5.00858[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.73375#011validation-rmse:4.66928[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:3.3596#011validation-rmse:4.46338[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:3.05883#011validation-rmse:4.27047[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.83087#011validation-rmse:4.14418[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[14]#011train-rmse:2.64985#011validation-rmse:4.08418[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[15]#011train-rmse:2.53003#011validation-rmse:3.97294[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[16]#011train-rmse:2.35601#011validation-rmse:3.87915[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[17]#011train-rmse:2.20624#011validation-rmse:3.8048[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[18]#011train-rmse:2.114#011validation-rmse:3.8039[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19]#011train-rmse:2.03966#011validation-rmse:3.80222[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:1.97544#011validation-rmse:3.82326[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:1.93208#011validation-rmse:3.81683[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:1.87865#011validation-rmse:3.82601[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:1.83744#011validation-rmse:3.82121[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.8024#011validation-rmse:3.8175[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.77311#011validation-rmse:3.79909[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.73275#011validation-rmse:3.85504[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.65388#011validation-rmse:3.85342[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.55806#011validation-rmse:3.8416[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[29]#011train-rmse:1.51695#011validation-rmse:3.84577[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.48807#011validation-rmse:3.85109[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.41759#011validation-rmse:3.83609[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[32]#011train-rmse:1.40029#011validation-rmse:3.84788[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[33]#011train-rmse:1.37087#011validation-rmse:3.84048[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[34]#011train-rmse:1.3535#011validation-rmse:3.82955[0m
[31m[10:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[35]#011train-rmse:1.30865#011validation-rmse:3.8248[0m
[31mStopping. Best iteration:[0m
[31m[25]#011train-rmse:1.77311#011validation-rmse:3.79909
[0m
2018-12-24 10:28:17 Completed - Training job completed
Billable seconds: 36
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.....................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (17.7 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-ap-northeast-2-458503936460/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
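# Hedged alternative (not part of the original notebook): the same clean-up can be done from
# plain Python, which also works outside a notebook environment.
# import shutil
# shutil.rmtree(data_dir, ignore_errors=True)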
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
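# Optional, hedged check (not in the original notebook): verify that the saved training file
# has no header row and that the target really is the first column, since that is the layout
# the built-in XGBoost algorithm expects (1 target column + 13 feature columns = 14 columns).
train_check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)
assert train_check.shape[1] == 14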
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
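# Note (added, hedged): on newer SageMaker SDK versions this call emits the deprecation
# warning shown in this cell's output, which suggests pinning the image version explicitly,
# e.g. container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1'). The original
# call is kept here so the rest of the notebook matches the recorded run.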
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-09-26 12:42:01 Starting - Starting the training job...
2020-09-26 12:42:03 Starting - Launching requested ML instances......
2020-09-26 12:43:30 Starting - Preparing the instances for training......
2020-09-26 12:44:22 Downloading - Downloading input data...
2020-09-26 12:44:45 Training - Downloading the training image.[34mArguments: train[0m
[34m[2020-09-26:12:45:05:INFO] Running standalone xgboost training.[0m
[34m[2020-09-26:12:45:05:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8480.45mb[0m
[34m[2020-09-26:12:45:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:45:05] S3DistributionType set as FullyReplicated[0m
[34m[12:45:05] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-09-26:12:45:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:45:05] S3DistributionType set as FullyReplicated[0m
[34m[12:45:05] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.8188#011validation-rmse:19.4225[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.2242#011validation-rmse:16.0779[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.3642#011validation-rmse:13.2868[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:11.051#011validation-rmse:11.046[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[4]#011train-rmse:9.24534#011validation-rmse:9.40268[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.76061#011validation-rmse:8.06597[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.58101#011validation-rmse:6.9824[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.63247#011validation-rmse:6.15856[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.8836#011validation-rmse:5.54448[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.29672#011validation-rmse:5.05043[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.83113#011validation-rmse:4.64241[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.51306#011validation-rmse:4.36591[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.23348#011validation-rmse:4.13003[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.99883#011validation-rmse:3.93631[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.82583#011validation-rmse:3.82191[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.72383#011validation-rmse:3.71543[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.60969#011validation-rmse:3.63841[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.54866#011validation-rmse:3.58956[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.47292#011validation-rmse:3.55691[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.37177#011validation-rmse:3.51814[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.29501#011validation-rmse:3.4728[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:2.24266#011validation-rmse:3.42883[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:2.17961#011validation-rmse:3.39305[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:2.15021#011validation-rmse:3.3747[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:2.08373#011validation-rmse:3.32324[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.99851#011validation-rmse:3.24952[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.96714#011validation-rmse:3.2683[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.93519#011validation-rmse:3.25596[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.88818#011validation-rmse:3.2912[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.82575#011validation-rmse:3.26119[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.7764#011validation-rmse:3.26416[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.73532#011validation-rmse:3.2362[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.6802#011validation-rmse:3.21414[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.63928#011validation-rmse:3.21957[0m
[34m[12:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.61691#011validation-rmse:3.22156[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.59335#011validation-rmse:3.19622[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.57469#011validation-rmse:3.19879[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[37]#011train-rmse:1.5345#011validation-rmse:3.18594[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.52252#011validation-rmse:3.19615[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.48486#011validation-rmse:3.18308[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.47198#011validation-rmse:3.18904[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.45285#011validation-rmse:3.19304[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.43815#011validation-rmse:3.19679[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.41239#011validation-rmse:3.18932[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.40737#011validation-rmse:3.17214[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.35649#011validation-rmse:3.18188[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.33763#011validation-rmse:3.17296[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.32357#011validation-rmse:3.17163[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.31028#011validation-rmse:3.15813[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:1.29206#011validation-rmse:3.16892[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[50]#011train-rmse:1.27318#011validation-rmse:3.14841[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[51]#011train-rmse:1.26242#011validation-rmse:3.14212[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[52]#011train-rmse:1.25699#011validation-rmse:3.14786[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[53]#011train-rmse:1.2399#011validation-rmse:3.12796[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:1.21838#011validation-rmse:3.15749[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=3[0m
[34m[55]#011train-rmse:1.20411#011validation-rmse:3.16398[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:1.18331#011validation-rmse:3.14574[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:1.14193#011validation-rmse:3.13419[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[58]#011train-rmse:1.14157#011validation-rmse:3.13529[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=3[0m
[34m[59]#011train-rmse:1.13076#011validation-rmse:3.13901[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[60]#011train-rmse:1.12944#011validation-rmse:3.13436[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[61]#011train-rmse:1.11736#011validation-rmse:3.12119[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[62]#011train-rmse:1.11083#011validation-rmse:3.11617[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=2[0m
[34m[63]#011train-rmse:1.10034#011validation-rmse:3.12131[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[64]#011train-rmse:1.09944#011validation-rmse:3.12703[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[65]#011train-rmse:1.09547#011validation-rmse:3.11522[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[66]#011train-rmse:1.06805#011validation-rmse:3.1195[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 6 pruned nodes, max_depth=2[0m
[34m[67]#011train-rmse:1.06438#011validation-rmse:3.11523[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[68]#011train-rmse:1.05785#011validation-rmse:3.12304[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[69]#011train-rmse:1.05282#011validation-rmse:3.1329[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[70]#011train-rmse:1.03174#011validation-rmse:3.1306[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[71]#011train-rmse:1.01671#011validation-rmse:3.12947[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[72]#011train-rmse:1.00919#011validation-rmse:3.13265[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[73]#011train-rmse:1.00259#011validation-rmse:3.11352[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[74]#011train-rmse:0.996847#011validation-rmse:3.11704[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[75]#011train-rmse:0.969105#011validation-rmse:3.13468[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[76]#011train-rmse:0.952598#011validation-rmse:3.13399[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[77]#011train-rmse:0.952601#011validation-rmse:3.13385[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[78]#011train-rmse:0.941253#011validation-rmse:3.13158[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[79]#011train-rmse:0.941312#011validation-rmse:3.13187[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[80]#011train-rmse:0.932329#011validation-rmse:3.13616[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 30 pruned nodes, max_depth=2[0m
[34m[81]#011train-rmse:0.929748#011validation-rmse:3.14159[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[82]#011train-rmse:0.920885#011validation-rmse:3.13978[0m
[34m[12:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 26 pruned nodes, max_depth=3[0m
[34m[83]#011train-rmse:0.911005#011validation-rmse:3.147[0m
[34mStopping. Best iteration:[0m
[34m[73]#011train-rmse:1.00259#011validation-rmse:3.11352
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
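# (Added sketch, not part of the original notebook.) The response holds far more detail
# than the artifact location; for instance, we could peek at the job status and, where
# reported, the final metrics. 'FinalMetricDataList' is optional, hence the .get() fallback.
print(training_job_info['TrainingJobStatus'])
print(training_job_info.get('FinalMetricDataList', 'no final metrics reported'))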
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (27.7 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-956613579044/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
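# (Added sketch, not part of the original notebook.) Overlay the ideal x = y reference line
# and print the test RMSE as a rough numeric companion to the scatter plot. This assumes the
# batch transform output rows come back in the same order as the rows of test.csv.
lims = [Y_test.values.min(), Y_test.values.max()]
plt.plot(lims, lims, 'r--')
print("Test RMSE: {:.3f}".format(np.sqrt(np.mean((Y_test.values - Y_pred.values) ** 2))))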
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
[33mWARNING: You are using pip version 20.3.3; however, version 21.0.1 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-02-10 22:12:34 Starting - Launching requested ML instances.........
2021-02-10 22:13:40 Starting - Preparing the instances for training......
2021-02-10 22:14:36 Downloading - Downloading input data...
2021-02-10 22:15:35 Training - Training image download completed. Training in progress.
2021-02-10 22:15:35 Uploading - Uploading generated training model.[34mArguments: train[0m
[34m[2021-02-10:22:15:30:INFO] Running standalone xgboost training.[0m
[34m[2021-02-10:22:15:30:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8452.65mb[0m
[34m[2021-02-10:22:15:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:15:30] S3DistributionType set as FullyReplicated[0m
[34m[22:15:30] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-02-10:22:15:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:15:30] S3DistributionType set as FullyReplicated[0m
[34m[22:15:30] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[0]#011train-rmse:19.2311#011validation-rmse:20.4823[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[1]#011train-rmse:15.6952#011validation-rmse:16.8887[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.8542#011validation-rmse:14.0626[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.5762#011validation-rmse:11.733[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.7619#011validation-rmse:9.9933[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.31548#011validation-rmse:8.59133[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.14557#011validation-rmse:7.43513[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.22276#011validation-rmse:6.74368[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.46188#011validation-rmse:6.09237[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.87797#011validation-rmse:5.6051[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.43204#011validation-rmse:5.27428[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.09601#011validation-rmse:5.04549[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.81947#011validation-rmse:4.7916[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.61265#011validation-rmse:4.68829[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.44741#011validation-rmse:4.61101[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.28975#011validation-rmse:4.50908[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.1873#011validation-rmse:4.45057[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.10201#011validation-rmse:4.43728[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.0251#011validation-rmse:4.34972[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.94772#011validation-rmse:4.30601[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.84603#011validation-rmse:4.29919[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.81224#011validation-rmse:4.28436[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.76976#011validation-rmse:4.31284[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.71659#011validation-rmse:4.23787[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.63637#011validation-rmse:4.25167[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.60054#011validation-rmse:4.24818[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.5539#011validation-rmse:4.25385[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.53148#011validation-rmse:4.2318[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.47713#011validation-rmse:4.18891[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.46444#011validation-rmse:4.20555[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.42689#011validation-rmse:4.22405[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.3739#011validation-rmse:4.19202[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.34766#011validation-rmse:4.16896[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.33594#011validation-rmse:4.16418[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.31979#011validation-rmse:4.14719[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.3051#011validation-rmse:4.16859[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[36]#011train-rmse:1.29083#011validation-rmse:4.16958[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.26031#011validation-rmse:4.1574[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.24258#011validation-rmse:4.15272[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.22149#011validation-rmse:4.14428[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.21023#011validation-rmse:4.13578[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[41]#011train-rmse:1.19874#011validation-rmse:4.11969[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.17447#011validation-rmse:4.10369[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.16862#011validation-rmse:4.10189[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 8 pruned nodes, max_depth=2[0m
[34m[44]#011train-rmse:1.1618#011validation-rmse:4.08526[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.12983#011validation-rmse:4.09062[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.08507#011validation-rmse:4.04647[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.07594#011validation-rmse:4.04705[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.05743#011validation-rmse:4.0339[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[49]#011train-rmse:1.04964#011validation-rmse:4.02362[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[50]#011train-rmse:1.04381#011validation-rmse:4.03279[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:1.03314#011validation-rmse:4.03014[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.0076#011validation-rmse:4.01567[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[53]#011train-rmse:0.995148#011validation-rmse:4.00343[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[54]#011train-rmse:0.99513#011validation-rmse:4.00335[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[55]#011train-rmse:0.976266#011validation-rmse:4.00321[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[56]#011train-rmse:0.969279#011validation-rmse:4.00711[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[57]#011train-rmse:0.96418#011validation-rmse:3.99266[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[58]#011train-rmse:0.950256#011validation-rmse:3.99737[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[59]#011train-rmse:0.95023#011validation-rmse:3.99732[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[60]#011train-rmse:0.949953#011validation-rmse:4.00773[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 16 pruned nodes, max_depth=1[0m
[34m[61]#011train-rmse:0.948444#011validation-rmse:4.00753[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 28 pruned nodes, max_depth=3[0m
[34m[62]#011train-rmse:0.938847#011validation-rmse:4.01518[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[63]#011train-rmse:0.933799#011validation-rmse:4.01188[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[64]#011train-rmse:0.925249#011validation-rmse:4.00381[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[65]#011train-rmse:0.925251#011validation-rmse:4.00374[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[66]#011train-rmse:0.925374#011validation-rmse:4.00347[0m
[34m[22:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[67]#011train-rmse:0.918711#011validation-rmse:4.0127[0m
[34mStopping. Best iteration:[0m
[34m[57]#011train-rmse:0.96418#011validation-rmse:3.99266
[0m
2021-02-10 22:15:42 Completed - Training job completed
Training seconds: 66
Billable seconds: 66
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
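# (Added sketch, not part of the original notebook.) As above for the first run, a dashed
# x = y reference line plus the test RMSE give a quick numeric summary of prediction quality,
# assuming the predictions are returned in the same row order as test.csv.
lims = [Y_test.values.min(), Y_test.values.max()]
plt.plot(lims, lims, 'r--')
print("Test RMSE: {:.3f}".format(np.sqrt(np.mean((Y_test.values - Y_pred.values) ** 2))))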
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
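###Markdown
Before uploading anything it can be worth a quick sanity check that the files were written the way the built in algorithms expect (no header row, and the target in the first column for the train and validation files). The cell below is an optional sketch that simply re-reads the training file and prints its shape.
###Code
# Optional sketch: re-read the saved training file and confirm its shape (assumes data_dir from above)
train_check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)
print("train.csv shape:", train_check.shape)  # expected: (n_rows, 1 target column + 13 feature columns)
###Output
_____no_output_____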
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
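###Markdown
Before we actually launch anything it can be helpful to look over the complete request we have assembled. The cell below is an optional sketch that pretty prints the `training_params` dictionary using the standard `json` module; nothing in it is sent to SageMaker.
###Code
# Optional sketch: review the assembled training job parameters before submitting them
import json
print(json.dumps(training_params, indent=2))
###Output
_____no_output_____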
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
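###Markdown
Streaming the logs is one way to wait; another is to poll the job status directly. The cell below is an optional sketch that uses the same SageMaker client as above and reads the `TrainingJobStatus` field from `describe_training_job`.
###Code
# Optional sketch: check the current status of the training job without streaming its logs
status = session.sagemaker_client.describe_training_job(
    TrainingJobName=training_job_name)['TrainingJobStatus']
print("Training job status:", status)  # e.g. InProgress, Completed or Failed
###Output
_____no_output_____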
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
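###Markdown
At this point `model_artifacts` points at the model archive on S3 and `model_name` identifies the SageMaker model we registered. If you ever want to confirm what a model refers to after the fact, `describe_model` returns this information; the cell below is an optional sketch that prints the container image and artifact location for the model we just created.
###Code
# Optional sketch: inspect the SageMaker model we just registered
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print("Container image: ", model_desc['PrimaryContainer']['Image'])
print("Model artifacts: ", model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____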
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to block until the transform job terminates (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
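###Markdown
Once the wait returns we can double check that the job actually succeeded rather than failed or was stopped. The cell below is an optional sketch that asks the SageMaker client to describe the transform job and prints its `TransformJobStatus`.
###Code
# Optional sketch: confirm that the batch transform job completed successfully
transform_status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']
print("Transform job status:", transform_status)  # e.g. Completed or Failed
###Output
_____no_output_____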
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(), prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
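###Markdown
Beyond the scatter plot, the residuals (actual minus predicted values) give another quick read on the fit. The cell below is an optional sketch; it assumes `Y_test` and `Y_pred` are row-aligned, as they are here, and prints a few summary statistics.
###Code
# Optional sketch: summarise the residuals of the batch transform predictions
residuals = Y_test.values.flatten() - Y_pred.values.flatten()
print("Mean residual:      {:.2f}".format(residuals.mean()))
print("Std of residuals:   {:.2f}".format(residuals.std()))
print("Max absolute error: {:.2f}".format(np.abs(residuals).max()))
###Output
_____no_output_____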
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 2.0 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.35)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: botocore<1.21.0,>=1.20.35 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.35)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.35->boto3>=1.14.12->sagemaker==1.72.0) (1.26.3)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.35->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=e08922dffa075881746660c40a51c2ff056507cbd0bbdeda922316e9d78fc57f
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.31.1
Uninstalling sagemaker-2.31.1:
Successfully uninstalled sagemaker-2.31.1
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
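###Markdown
If you want to verify that the three files actually arrived in the bucket, one option is to list the objects under the prefix we just used. The cell below is an optional sketch that creates an S3 client with `boto3` and calls `list_objects_v2` against `session.default_bucket()`.
###Code
# Optional sketch: list the objects that were uploaded under our prefix
import boto3
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____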
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-04-20 16:37:05 Starting - Launching requested ML instances.........
2021-04-20 16:38:10 Starting - Preparing the instances for training......
2021-04-20 16:39:34 Downloading - Downloading input data...
2021-04-20 16:40:05 Training - Downloading the training image...
2021-04-20 16:40:38 Uploading - Uploading generated training model
2021-04-20 16:40:38 Completed - Training job completed
[34mArguments: train[0m
[34m[2021-04-20:16:40:26:INFO] Running standalone xgboost training.[0m
[34m[2021-04-20:16:40:26:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8421.91mb[0m
[34m[2021-04-20:16:40:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[16:40:26] S3DistributionType set as FullyReplicated[0m
[34m[16:40:26] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-04-20:16:40:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[16:40:26] S3DistributionType set as FullyReplicated[0m
[34m[16:40:26] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.3544#011validation-rmse:18.9613[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.7937#011validation-rmse:15.4168[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.999#011validation-rmse:12.4933[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.6904#011validation-rmse:10.157[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.9237#011validation-rmse:8.36138[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.48572#011validation-rmse:7.02786[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.31324#011validation-rmse:5.94813[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.37586#011validation-rmse:5.09657[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.61952#011validation-rmse:4.3503[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.02452#011validation-rmse:3.85011[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.58894#011validation-rmse:3.44295[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.22007#011validation-rmse:3.12891[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.91874#011validation-rmse:2.91647[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.69666#011validation-rmse:2.76039[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.52873#011validation-rmse:2.68131[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.37425#011validation-rmse:2.61352[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.24411#011validation-rmse:2.58492[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.13914#011validation-rmse:2.57637[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.08504#011validation-rmse:2.57951[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.00727#011validation-rmse:2.56486[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.92068#011validation-rmse:2.58371[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.89517#011validation-rmse:2.56571[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.85608#011validation-rmse:2.57257[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.83671#011validation-rmse:2.58019[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.78803#011validation-rmse:2.59017[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.75715#011validation-rmse:2.59203[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.68562#011validation-rmse:2.59473[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.59945#011validation-rmse:2.57609[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.53908#011validation-rmse:2.56628[0m
[34m[16:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.48873#011validation-rmse:2.57084[0m
[34mStopping. Best iteration:[0m
[34m[19]#011train-rmse:2.00727#011validation-rmse:2.56486
[0m
Training seconds: 64
Billable seconds: 64
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to block until the transform job terminates (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
......................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (58.6 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-ap-south-1-135661043022/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
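###Markdown
To make the comparison with the ideal line $x=y$ explicit, we can redraw the scatter plot with that reference line overlaid. The cell below is an optional sketch; it reuses `Y_test` and `Y_pred` from the cell above and picks the line's end points from the observed range of prices.
###Code
# Optional sketch: scatter plot with the perfect-prediction line x = y overlaid
plt.scatter(Y_test, Y_pred)
lims = [min(Y_test.values.min(), Y_pred.values.min()),
        max(Y_test.values.max(), Y_pred.values.max())]
plt.plot(lims, lims, 'r--', label='x = y')  # points on this line would be perfect predictions
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
plt.legend()
###Output
_____no_output_____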
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image.To use the newer image, please set 'repo_version'='0.90-1. For example:
get_image_uri(region, 'xgboost', 0.90-1).
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2019-08-27 21:40:34 Starting - Starting the training job...
2019-08-27 21:40:37 Starting - Launching requested ML instances...
2019-08-27 21:41:33 Starting - Preparing the instances for training......
2019-08-27 21:42:33 Downloading - Downloading input data...
2019-08-27 21:43:05 Training - Downloading the training image...
2019-08-27 21:43:35 Uploading - Uploading generated training model
2019-08-27 21:43:35 Completed - Training job completed
[31mArguments: train[0m
[31m[2019-08-27:21:43:24:INFO] Running standalone xgboost training.[0m
[31m[2019-08-27:21:43:24:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8608.82mb[0m
[31m[2019-08-27:21:43:24:INFO] Determined delimiter of CSV input is ','[0m
[31m[21:43:24] S3DistributionType set as FullyReplicated[0m
[31m[21:43:24] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2019-08-27:21:43:24:INFO] Determined delimiter of CSV input is ','[0m
[31m[21:43:24] S3DistributionType set as FullyReplicated[0m
[31m[21:43:24] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2[0m
[31m[0]#011train-rmse:18.7446#011validation-rmse:20.3114[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[1]#011train-rmse:15.3682#011validation-rmse:16.7187[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[2]#011train-rmse:12.7158#011validation-rmse:13.7853[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[3]#011train-rmse:10.5331#011validation-rmse:11.3954[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[4]#011train-rmse:8.83478#011validation-rmse:9.43905[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[5]#011train-rmse:7.42815#011validation-rmse:7.87723[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:6.26734#011validation-rmse:6.7776[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.37792#011validation-rmse:6.02055[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.63321#011validation-rmse:5.3154[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:4.0998#011validation-rmse:4.73579[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.66009#011validation-rmse:4.41954[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:3.27343#011validation-rmse:4.12977[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:2.94521#011validation-rmse:3.99836[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.72809#011validation-rmse:3.87834[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[14]#011train-rmse:2.56878#011validation-rmse:3.80722[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[15]#011train-rmse:2.44744#011validation-rmse:3.76946[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[16]#011train-rmse:2.35383#011validation-rmse:3.75243[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[17]#011train-rmse:2.27894#011validation-rmse:3.76212[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[18]#011train-rmse:2.18099#011validation-rmse:3.7393[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19]#011train-rmse:2.10517#011validation-rmse:3.73472[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:2.04672#011validation-rmse:3.74278[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:1.99714#011validation-rmse:3.75499[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:1.94976#011validation-rmse:3.74308[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:1.90629#011validation-rmse:3.7271[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.87513#011validation-rmse:3.75118[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.84555#011validation-rmse:3.76824[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.79244#011validation-rmse:3.79013[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.74069#011validation-rmse:3.78957[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.66631#011validation-rmse:3.79269[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[29]#011train-rmse:1.58647#011validation-rmse:3.7964[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.54195#011validation-rmse:3.79581[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.50808#011validation-rmse:3.77676[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[32]#011train-rmse:1.49174#011validation-rmse:3.76672[0m
[31m[21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[33]#011train-rmse:1.46133#011validation-rmse:3.77876[0m
[31mStopping. Best iteration:[0m
[31m[23]#011train-rmse:1.90629#011validation-rmse:3.7271
[0m
Training seconds: 62
Billable seconds: 62
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
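# (Optional aside, not in the original notebook.) The describe_training_job response holds
# far more than the artifact location; listing its top-level keys is a safe way to explore
# it, and the job's final metrics are usually reported under 'FinalMetricDataList'.
available_fields = sorted(training_job_info.keys())
final_metrics = training_job_info.get('FinalMetricDataList', [])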
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (36.7 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-080917825853/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
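
# (A small hedged addition, not in the original notebook.) The plot gives a qualitative
# picture; the test RMSE summarises the fit in a single number. This relies on the batch
# transform output keeping the row order of test.csv, the same assumption the scatter
# plot above already makes.
test_rmse = float(((Y_test.values - Y_pred.values) ** 2).mean() ** 0.5)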
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
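
# (Hedged aside, not part of the original notebook.) If you prefer polling to streaming
# the logs, the low-level client reports the job status directly:
current_status = session.sagemaker_client.describe_training_job(
    TrainingJobName=training_job_name)['TrainingJobStatus']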
###Output
2020-07-15 21:20:56 Starting - Preparing the instances for training
2020-07-15 21:20:56 Downloading - Downloading input data
2020-07-15 21:20:56 Training - Training image download completed. Training in progress.
2020-07-15 21:20:56 Uploading - Uploading generated training model
2020-07-15 21:20:56 Completed - Training job completed[34mArguments: train[0m
[34m[2020-07-15:21:20:44:INFO] Running standalone xgboost training.[0m
[34m[2020-07-15:21:20:44:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8486.62mb[0m
[34m[2020-07-15:21:20:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:20:44] S3DistributionType set as FullyReplicated[0m
[34m[21:20:44] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-07-15:21:20:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:20:44] S3DistributionType set as FullyReplicated[0m
[34m[21:20:44] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[0]#011train-rmse:19.3131#011validation-rmse:19.3677[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.7725#011validation-rmse:15.7909[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.9638#011validation-rmse:13.1836[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.6746#011validation-rmse:10.881[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.89508#011validation-rmse:9.08898[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[5]#011train-rmse:7.484#011validation-rmse:7.62423[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.32533#011validation-rmse:6.41224[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.40288#011validation-rmse:5.58798[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.60501#011validation-rmse:4.91192[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.02601#011validation-rmse:4.41009[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.53431#011validation-rmse:4.06278[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.14778#011validation-rmse:3.76259[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.88295#011validation-rmse:3.53324[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.71146#011validation-rmse:3.37197[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.57289#011validation-rmse:3.26341[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.41069#011validation-rmse:3.18323[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.33048#011validation-rmse:3.12445[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.20356#011validation-rmse:3.07399[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.1333#011validation-rmse:3.03724[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.09564#011validation-rmse:3.02887[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.03774#011validation-rmse:3.01375[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.99402#011validation-rmse:3.02047[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.963#011validation-rmse:3.01568[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.89907#011validation-rmse:2.99705[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.87026#011validation-rmse:2.99435[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.84251#011validation-rmse:2.98808[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.76376#011validation-rmse:2.97719[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.74317#011validation-rmse:2.98012[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.69456#011validation-rmse:2.98028[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.62058#011validation-rmse:2.95072[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.59493#011validation-rmse:2.96544[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.54787#011validation-rmse:2.92863[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.49938#011validation-rmse:2.9007[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.46365#011validation-rmse:2.89019[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.43576#011validation-rmse:2.87515[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.41387#011validation-rmse:2.866[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.3775#011validation-rmse:2.83455[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.34647#011validation-rmse:2.84426[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.31449#011validation-rmse:2.84333[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.29226#011validation-rmse:2.86315[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.26824#011validation-rmse:2.86945[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.2455#011validation-rmse:2.85772[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.22973#011validation-rmse:2.86418[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.21966#011validation-rmse:2.84275[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[44]#011train-rmse:1.21451#011validation-rmse:2.83971[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[45]#011train-rmse:1.19419#011validation-rmse:2.84262[0m
[34m[21:20:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.18813#011validation-rmse:2.83644[0m
[34mStopping. Best iteration:[0m
[34m[36]#011train-rmse:1.3775#011validation-rmse:2.83455
[0m
Training seconds: 46
Billable seconds: 46
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
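
# (Optional sketch, not in the original notebook.) Once the wait returns, the low-level
# client can confirm how the job finished; describe_transform_job is the corresponding
# boto3 call and its response includes the job status.
final_status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']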
###Output
...........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
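
# (Small optional check, not in the original notebook.) After the two splits the 506 rows
# of the Boston dataset end up as 227 train / 112 validation / 167 test, which matches the
# 227x13 and 112x13 matrix sizes reported in the training logs; the shapes confirm it:
split_sizes = (X_train.shape[0], X_val.shape[0], X_test.shape[0])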
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
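
# (Hedged note, not in the original notebook.) The create_training_job response is not used
# further here, but it does contain the ARN of the job that was just created:
training_job_arn = training_job['TrainingJobArn']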
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-04-26 17:49:49 Starting - Launching requested ML instances......
2020-04-26 17:50:49 Starting - Preparing the instances for training......
2020-04-26 17:51:47 Downloading - Downloading input data...
2020-04-26 17:52:12 Training - Downloading the training image...
2020-04-26 17:52:49 Uploading - Uploading generated training model
2020-04-26 17:52:49 Completed - Training job completed
[34mINFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training[0m
[34mINFO:sagemaker-containers:Failed to parse hyperparameter objective value reg:linear to Json.[0m
[34mReturning the value itself[0m
[34mINFO:sagemaker-containers:No GPUs detected (normal if no gpus installed)[0m
[34mINFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[17:52:38] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[17:52:38] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Single node training.[0m
[34mINFO:root:Train matrix has 227 rows[0m
[34mINFO:root:Validation matrix has 112 rows[0m
[34m[17:52:38] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.[0m
[34m[0]#011train-rmse:19.857#011validation-rmse:20.0832[0m
[34m[1]#011train-rmse:16.2128#011validation-rmse:16.3931[0m
[34m[2]#011train-rmse:13.2992#011validation-rmse:13.4946[0m
[34m[3]#011train-rmse:10.9037#011validation-rmse:11.2556[0m
[34m[4]#011train-rmse:8.96658#011validation-rmse:9.3318[0m
[34m[5]#011train-rmse:7.47607#011validation-rmse:7.93902[0m
[34m[6]#011train-rmse:6.33904#011validation-rmse:6.90935[0m
[34m[7]#011train-rmse:5.4345#011validation-rmse:6.13651[0m
[34m[8]#011train-rmse:4.63955#011validation-rmse:5.55331[0m
[34m[9]#011train-rmse:4.06301#011validation-rmse:5.16625[0m
[34m[10]#011train-rmse:3.60145#011validation-rmse:4.98551[0m
[34m[11]#011train-rmse:3.20871#011validation-rmse:4.83409[0m
[34m[12]#011train-rmse:2.90332#011validation-rmse:4.69032[0m
[34m[13]#011train-rmse:2.68747#011validation-rmse:4.59865[0m
[34m[14]#011train-rmse:2.45322#011validation-rmse:4.58851[0m
[34m[15]#011train-rmse:2.30508#011validation-rmse:4.57328[0m
[34m[16]#011train-rmse:2.1798#011validation-rmse:4.60795[0m
[34m[17]#011train-rmse:2.06882#011validation-rmse:4.57974[0m
[34m[18]#011train-rmse:1.98415#011validation-rmse:4.61893[0m
[34m[19]#011train-rmse:1.93782#011validation-rmse:4.61695[0m
[34m[20]#011train-rmse:1.88722#011validation-rmse:4.66425[0m
[34m[21]#011train-rmse:1.84453#011validation-rmse:4.66202[0m
[34m[22]#011train-rmse:1.79772#011validation-rmse:4.65719[0m
[34m[23]#011train-rmse:1.77737#011validation-rmse:4.67169[0m
[34m[24]#011train-rmse:1.74203#011validation-rmse:4.66672[0m
[34m[25]#011train-rmse:1.68991#011validation-rmse:4.66873[0m
Training seconds: 62
Billable seconds: 62
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..............................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
print(transform_output)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (53.1 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-037690205935/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 41.4 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.19)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.18.1)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (2.0.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.1)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: botocore<1.20.0,>=1.19.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.19)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.14.0)
Requirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (45.2.0.post20200210)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (2.2.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.6)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (1.25.10)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=a54fee13acf38a3078ba71a95b0fcce300019dad296f9efd0342d37c1b8351d6
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 0.1.6
Uninstalling smdebug-rulesconfig-0.1.6:
Successfully uninstalled smdebug-rulesconfig-0.1.6
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.16.4.dev0
Uninstalling sagemaker-2.16.4.dev0:
Successfully uninstalled sagemaker-2.16.4.dev0
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
[33mWARNING: You are using pip version 20.0.2; however, version 20.3.3 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
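###Markdown
Before moving on, it can be handy to confirm which AWS region and default S3 bucket the session resolved to, since both are used to build S3 paths later in the notebook. The cell below is just an illustrative sanity check.
###Code
# Illustrative sanity check: the region and default S3 bucket this SageMaker session will use.
print(session.boto_region_name)
print(session.default_bucket())
###Output
_____no_output_____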
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
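###Markdown
A quick look at the shape of the data and the feature names gives a feel for what we are working with. This inspection cell is purely illustrative and not required for the rest of the notebook.
###Code
# Illustrative inspection of the dataset returned by load_boston().
print(boston.data.shape)     # expected: (506, 13) -- 506 homes described by 13 features
print(boston.feature_names)
###Output
_____no_output_____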
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
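###Markdown
Since we split twice with `test_size=0.33`, roughly 45% of the rows end up in the training set, about 22% in the validation set and about 33% in the test set. The cell below is an optional check of the resulting sizes.
###Code
# Optional check: sizes of the train, validation and test splits.
print("train:", X_train.shape, " val:", X_val.shape, " test:", X_test.shape)
###Output
_____no_output_____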
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
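###Markdown
Each call to `upload_data` returns the S3 URI where the file ended up, and these URIs are exactly what we hand to the training and batch transform jobs below. Printing them is an optional way to verify that everything landed under the expected prefix.
###Code
# Optional: confirm where the data files were uploaded.
print(train_location)
print(val_location)
print(test_location)
###Output
_____no_output_____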
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
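###Markdown
Since `training_params` is an ordinary nested Python dictionary, it can be worth pretty-printing it for a final review before submitting the request. The cell below is an optional inspection step and assumes nothing beyond the dictionary built above.
###Code
# Optional: pretty-print the training job request before submitting it.
import json
print(json.dumps(training_params, indent=2))
###Output
_____no_output_____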
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
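###Markdown
If you want a quick snapshot of the job's state without streaming the full logs (which the next cell does), you can query it directly with the low-level client; `TrainingJobStatus` will be one of `InProgress`, `Completed`, `Failed`, `Stopping` or `Stopped`. This is an optional check.
###Code
# Optional: one-off status check for the training job we just created.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
print(job_desc['TrainingJobStatus'])
###Output
_____no_output_____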
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-12-17 03:42:31 Starting - Starting the training job...
2020-12-17 03:42:33 Starting - Launching requested ML instances......
2020-12-17 03:43:48 Starting - Preparing the instances for training......
2020-12-17 03:44:42 Downloading - Downloading input data...
2020-12-17 03:45:17 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-12-17:03:45:38:INFO] Running standalone xgboost training.[0m
[34m[2020-12-17:03:45:38:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8446.92mb[0m
[34m[2020-12-17:03:45:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:45:38] S3DistributionType set as FullyReplicated[0m
[34m[03:45:38] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-12-17:03:45:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:45:38] S3DistributionType set as FullyReplicated[0m
[34m[03:45:38] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.7065#011validation-rmse:18.5842[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.1671#011validation-rmse:15.2685[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:13.2144#011validation-rmse:12.5412[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.8785#011validation-rmse:10.3291[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.08219#011validation-rmse:8.6974[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.59479#011validation-rmse:7.33901[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.39213#011validation-rmse:6.31128[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.48037#011validation-rmse:5.61484[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.70741#011validation-rmse:5.02152[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.15645#011validation-rmse:4.65854[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.69245#011validation-rmse:4.33171[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.34931#011validation-rmse:4.07579[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.04768#011validation-rmse:3.8819[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.78618#011validation-rmse:3.72065[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.63808#011validation-rmse:3.70509[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.50284#011validation-rmse:3.6421[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.37132#011validation-rmse:3.57954[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.29844#011validation-rmse:3.56863[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.19925#011validation-rmse:3.49482[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.13312#011validation-rmse:3.4643[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.07915#011validation-rmse:3.45007[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:2.01566#011validation-rmse:3.42334[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.96578#011validation-rmse:3.37898[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.9326#011validation-rmse:3.37188[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.89971#011validation-rmse:3.34891[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.79644#011validation-rmse:3.36891[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.75075#011validation-rmse:3.32175[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.65705#011validation-rmse:3.2801[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.61417#011validation-rmse:3.28417[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.58175#011validation-rmse:3.30193[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.5509#011validation-rmse:3.32142[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.51497#011validation-rmse:3.3001[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.45906#011validation-rmse:3.27012[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.4385#011validation-rmse:3.24937[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.40865#011validation-rmse:3.22099[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.39844#011validation-rmse:3.23563[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[36]#011train-rmse:1.38464#011validation-rmse:3.22714[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.36937#011validation-rmse:3.22625[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.36435#011validation-rmse:3.23085[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.34749#011validation-rmse:3.21578[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.29011#011validation-rmse:3.19104[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.26507#011validation-rmse:3.17801[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[42]#011train-rmse:1.25664#011validation-rmse:3.17489[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.23078#011validation-rmse:3.15502[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.20449#011validation-rmse:3.14924[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[45]#011train-rmse:1.20568#011validation-rmse:3.15654[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.18716#011validation-rmse:3.15701[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[47]#011train-rmse:1.1816#011validation-rmse:3.14747[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.1653#011validation-rmse:3.13868[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:1.14584#011validation-rmse:3.13437[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[50]#011train-rmse:1.13731#011validation-rmse:3.12423[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[51]#011train-rmse:1.11298#011validation-rmse:3.11808[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[52]#011train-rmse:1.09732#011validation-rmse:3.10818[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[53]#011train-rmse:1.06984#011validation-rmse:3.12214[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[54]#011train-rmse:1.04907#011validation-rmse:3.12844[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[55]#011train-rmse:1.03942#011validation-rmse:3.11977[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:1.02553#011validation-rmse:3.11341[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:1.00393#011validation-rmse:3.11076[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 18 pruned nodes, max_depth=1[0m
[34m[58]#011train-rmse:1.00433#011validation-rmse:3.1192[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[59]#011train-rmse:0.996355#011validation-rmse:3.11243[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 22 pruned nodes, max_depth=3[0m
[34m[60]#011train-rmse:0.97763#011validation-rmse:3.11046[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, max_depth=1[0m
[34m[61]#011train-rmse:0.977801#011validation-rmse:3.11732[0m
[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[62]#011train-rmse:0.977827#011validation-rmse:3.11705[0m
[34mStopping. Best iteration:[0m
[34m[52]#011train-rmse:1.09732#011validation-rmse:3.10818
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
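###Markdown
The model artifacts are just an archive that the training job wrote to S3 (a `model.tar.gz` for the built-in XGBoost container), and the SageMaker model we registered simply points at it. Printing the two values below is an optional way to see how the pieces fit together.
###Code
# Optional: where the trained model artifacts live, and the name of the SageMaker model that wraps them.
print(model_artifacts)
print(model_name)
###Output
_____no_output_____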
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (28.7 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-302590777472/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
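###Markdown
The warning above is informational: calling `get_image_uri` without a version picks up an older XGBoost image. If you want the newer image it mentions, you could pin the repository version exactly as the warning suggests. The cell below is an optional sketch; `newer_container` is just an illustrative variable name, and to actually use it you would rebuild `training_params['AlgorithmSpecification']` so that `TrainingImage` points at it.
###Code
# Optional sketch: pin the newer XGBoost image version mentioned in the warning above.
# Note: training_params was built with the original 'container', so to actually train with this
# image you would rebuild the 'AlgorithmSpecification' entry using newer_container instead.
newer_container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1')
print(newer_container)
###Output
_____no_output_____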
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
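###Markdown
The deprecation warning above also points out that a newer XGBoost image is available. If you wanted to try it, one option (a sketch based on the warning's own suggestion; note that the captured results below were produced with the default image) would be to request an explicit repository version:
###Code
# Sketch: request the newer XGBoost image by passing an explicit repo version,
# as suggested by the warning above. Assign it to container to use it for training.
newer_container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')
print(newer_container)
###Output
_____no_output_____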
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-09-25 14:16:29 Starting - Launching requested ML instances......
2020-09-25 14:17:37 Starting - Preparing the instances for training......
2020-09-25 14:18:25 Downloading - Downloading input data...
2020-09-25 14:18:44 Training - Downloading the training image.[34mArguments: train[0m
[34m[2020-09-25:14:19:04:INFO] Running standalone xgboost training.[0m
[34m[2020-09-25:14:19:04:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8470.47mb[0m
[34m[2020-09-25:14:19:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[14:19:04] S3DistributionType set as FullyReplicated[0m
[34m[14:19:04] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-09-25:14:19:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[14:19:04] S3DistributionType set as FullyReplicated[0m
[34m[14:19:04] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.3026#011validation-rmse:18.8207[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[1]#011train-rmse:15.7897#011validation-rmse:15.3731[0m
[34m[2]#011train-rmse:12.9123#011validation-rmse:12.687[0m
[34m[3]#011train-rmse:10.605#011validation-rmse:10.5764[0m
[34m[4]#011train-rmse:8.75906#011validation-rmse:8.95235[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.30053#011validation-rmse:7.76143[0m
[34m[6]#011train-rmse:6.14763#011validation-rmse:6.83042[0m
[34m[7]#011train-rmse:5.22542#011validation-rmse:6.22573[0m
[34m[8]#011train-rmse:4.48973#011validation-rmse:5.73064[0m
[34m[9]#011train-rmse:3.91534#011validation-rmse:5.35467[0m
[34m[10]#011train-rmse:3.48694#011validation-rmse:5.11529[0m
[34m[11]#011train-rmse:3.1468#011validation-rmse:4.93854[0m
[34m[12]#011train-rmse:2.87099#011validation-rmse:4.83464[0m
[34m[13]#011train-rmse:2.69133#011validation-rmse:4.75837[0m
[34m[14]#011train-rmse:2.54452#011validation-rmse:4.68176[0m
[34m[15]#011train-rmse:2.42523#011validation-rmse:4.65465[0m
[34m[16]#011train-rmse:2.31566#011validation-rmse:4.63384[0m
[34m[17]#011train-rmse:2.17176#011validation-rmse:4.63517[0m
[34m[18]#011train-rmse:2.09744#011validation-rmse:4.58876[0m
[34m[19]#011train-rmse:2.03987#011validation-rmse:4.61157[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.95711#011validation-rmse:4.54262[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.91098#011validation-rmse:4.52819[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.82428#011validation-rmse:4.46005[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.78544#011validation-rmse:4.43534[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.73691#011validation-rmse:4.35008[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.70083#011validation-rmse:4.32845[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.65479#011validation-rmse:4.32952[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.59494#011validation-rmse:4.32859[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.56743#011validation-rmse:4.27241[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.55706#011validation-rmse:4.29764[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 6 pruned nodes, max_depth=2[0m
[34m[30]#011train-rmse:1.55#011validation-rmse:4.27712[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.49524#011validation-rmse:4.28749[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[32]#011train-rmse:1.47208#011validation-rmse:4.25974[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.4543#011validation-rmse:4.24212[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.41346#011validation-rmse:4.23453[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[35]#011train-rmse:1.38142#011validation-rmse:4.19044[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.3472#011validation-rmse:4.18607[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[37]#011train-rmse:1.32258#011validation-rmse:4.1569[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.30757#011validation-rmse:4.15178[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.2776#011validation-rmse:4.15051[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.25532#011validation-rmse:4.12735[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.24366#011validation-rmse:4.12647[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[42]#011train-rmse:1.23541#011validation-rmse:4.11076[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.20841#011validation-rmse:4.08887[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.18799#011validation-rmse:4.05191[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[45]#011train-rmse:1.17547#011validation-rmse:4.04239[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[46]#011train-rmse:1.15438#011validation-rmse:4.04319[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 12 pruned nodes, max_depth=2[0m
[34m[47]#011train-rmse:1.14776#011validation-rmse:4.03192[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[48]#011train-rmse:1.13513#011validation-rmse:4.02094[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:1.12721#011validation-rmse:4.02718[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[50]#011train-rmse:1.116#011validation-rmse:4.01627[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[51]#011train-rmse:1.09528#011validation-rmse:3.98664[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.08917#011validation-rmse:3.99214[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[53]#011train-rmse:1.08403#011validation-rmse:3.98462[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:1.05489#011validation-rmse:3.98299[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[55]#011train-rmse:1.04091#011validation-rmse:3.96258[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 16 pruned nodes, max_depth=1[0m
[34m[56]#011train-rmse:1.03877#011validation-rmse:3.95543[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[57]#011train-rmse:1.0294#011validation-rmse:3.9647[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[58]#011train-rmse:1.0161#011validation-rmse:3.94334[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[59]#011train-rmse:1.0161#011validation-rmse:3.94347[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[60]#011train-rmse:1.00374#011validation-rmse:3.93798[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[61]#011train-rmse:0.994701#011validation-rmse:3.93634[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[62]#011train-rmse:0.989821#011validation-rmse:3.93452[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[63]#011train-rmse:0.983868#011validation-rmse:3.92101[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[64]#011train-rmse:0.985067#011validation-rmse:3.92646[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[65]#011train-rmse:0.975781#011validation-rmse:3.92662[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[66]#011train-rmse:0.978054#011validation-rmse:3.93281[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[67]#011train-rmse:0.968946#011validation-rmse:3.92875[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[68]#011train-rmse:0.963615#011validation-rmse:3.91392[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[69]#011train-rmse:0.963625#011validation-rmse:3.91401[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 12 pruned nodes, max_depth=1[0m
[34m[70]#011train-rmse:0.965249#011validation-rmse:3.91957[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[71]#011train-rmse:0.965135#011validation-rmse:3.91932[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[72]#011train-rmse:0.951431#011validation-rmse:3.891[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 18 pruned nodes, max_depth=1[0m
[34m[73]#011train-rmse:0.949508#011validation-rmse:3.88594[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[74]#011train-rmse:0.938006#011validation-rmse:3.88545[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[75]#011train-rmse:0.922393#011validation-rmse:3.86565[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[76]#011train-rmse:0.922427#011validation-rmse:3.8659[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 22 pruned nodes, max_depth=3[0m
[34m[77]#011train-rmse:0.909516#011validation-rmse:3.8554[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[78]#011train-rmse:0.906608#011validation-rmse:3.84918[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[79]#011train-rmse:0.906622#011validation-rmse:3.84906[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[80]#011train-rmse:0.905487#011validation-rmse:3.84467[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[81]#011train-rmse:0.905456#011validation-rmse:3.84475[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[82]#011train-rmse:0.888487#011validation-rmse:3.86496[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[83]#011train-rmse:0.878274#011validation-rmse:3.87801[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[84]#011train-rmse:0.867755#011validation-rmse:3.8593[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[85]#011train-rmse:0.867763#011validation-rmse:3.85921[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[86]#011train-rmse:0.860938#011validation-rmse:3.85567[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[87]#011train-rmse:0.860829#011validation-rmse:3.85579[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 20 pruned nodes, max_depth=1[0m
[34m[88]#011train-rmse:0.861011#011validation-rmse:3.86203[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[89]#011train-rmse:0.861024#011validation-rmse:3.86221[0m
[34m[14:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[90]#011train-rmse:0.861062#011validation-rmse:3.86186[0m
[34mStopping. Best iteration:[0m
[34m[80]#011train-rmse:0.905487#011validation-rmse:3.84467
[0m
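###Markdown
Streaming the logs is convenient, but an alternative is to poll the job status directly. A minimal sketch, assuming the training job name from above and the time module imported in the setup cell; describe_training_job returns a dictionary whose 'TrainingJobStatus' field moves from 'InProgress' to 'Completed' (or 'Failed'/'Stopped'):
###Code
# Sketch: poll the training job status instead of streaming the logs.
status = session.sagemaker_client.describe_training_job(
    TrainingJobName=training_job_name)['TrainingJobStatus']
while status == 'InProgress':
    time.sleep(30)
    status = session.sagemaker_client.describe_training_job(
        TrainingJobName=training_job_name)['TrainingJobStatus']
print(status)
###Output
_____no_output_____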
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
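###Markdown
Before moving on, it can be reassuring to confirm that the model was registered. A sketch using the boto3 client already wrapped by the session (describe_model is the corresponding SageMaker API call):
###Code
# Sketch: look up the model we just created and show where its artifacts live.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['ModelName'])
print(model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____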
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; in this case, however, each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (32.3 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-428747017283/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
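###Markdown
To put a number on "room for improvement", a simple error metric can be computed from the same two columns used in the scatter plot. A minimal sketch, assuming the rows of test.csv.out come back in the same order as the rows of test.csv (which is how the batch transform input above was configured):
###Code
# Sketch: quantify the fit with the root mean squared error between actual and predicted prices.
rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(rmse))
###Output
_____no_output_____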
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since this is the format required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; for the additional options, please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; in this case, however, each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Using cached sagemaker-1.72.0-py2.py3-none-any.whl
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.16)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0)
Collecting smdebug-rulesconfig==0.1.4
Using cached smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.18.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.1)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: botocore<1.21.0,>=1.20.16 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.16)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.16->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.16->boto3>=1.14.12->sagemaker==1.72.0) (1.25.10)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (2.2.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.6)
Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (1.14.0)
Requirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (45.2.0.post20200210)
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.25.2
Uninstalling sagemaker-2.25.2:
Successfully uninstalled sagemaker-2.25.2
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
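###Markdown
Since the cell above downgrades the SDK, it can be worth confirming which version the kernel actually has loaded before continuing (a sketch; if sagemaker was already imported, the kernel may need a restart for the pinned version to take effect):
###Code
# Sketch: confirm the SageMaker SDK version that the kernel imports after the pin.
import sagemaker
print(sagemaker.__version__)
###Output
_____no_output_____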
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since this is the format required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-03-06 22:06:56 Starting - Launching requested ML instances......
2021-03-06 22:07:59 Starting - Preparing the instances for training......
2021-03-06 22:09:13 Downloading - Downloading input data
2021-03-06 22:09:13 Training - Downloading the training image...
2021-03-06 22:09:39 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2021-03-06:22:09:34:INFO] Running standalone xgboost training.[0m
[34m[2021-03-06:22:09:34:INFO] File size need to be processed in the node: 0.03mb. Available memory size in the node: 8447.92mb[0m
[34m[2021-03-06:22:09:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:09:34] S3DistributionType set as FullyReplicated[0m
[34m[22:09:34] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-03-06:22:09:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:09:34] S3DistributionType set as FullyReplicated[0m
[34m[22:09:34] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 2 pruned nodes, max_depth=2[0m
[34m[0]#011train-rmse:19.0133#011validation-rmse:20.4131[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.563#011validation-rmse:16.77[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:12.7543#011validation-rmse:13.885[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.585#011validation-rmse:11.7265[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.81173#011validation-rmse:9.73879[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.4135#011validation-rmse:8.3298[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.2396#011validation-rmse:7.17067[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.32691#011validation-rmse:6.35407[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.56633#011validation-rmse:5.6986[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.9854#011validation-rmse:5.19529[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.57145#011validation-rmse:4.90821[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.21947#011validation-rmse:4.59762[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.94027#011validation-rmse:4.41136[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.73156#011validation-rmse:4.28288[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.53491#011validation-rmse:4.20839[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.39359#011validation-rmse:4.15878[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.2927#011validation-rmse:4.12305[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.21078#011validation-rmse:4.0841[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.14364#011validation-rmse:4.04321[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.03457#011validation-rmse:4.01935[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.00659#011validation-rmse:4.00129[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.90176#011validation-rmse:3.94567[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.85004#011validation-rmse:3.91591[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.81268#011validation-rmse:3.90733[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.75597#011validation-rmse:3.89994[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.71317#011validation-rmse:3.86929[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.6497#011validation-rmse:3.84161[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.60305#011validation-rmse:3.8319[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.56476#011validation-rmse:3.83971[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.52867#011validation-rmse:3.84427[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.49985#011validation-rmse:3.85014[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.47527#011validation-rmse:3.84552[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[32]#011train-rmse:1.44001#011validation-rmse:3.82954[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[33]#011train-rmse:1.43573#011validation-rmse:3.82847[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.39624#011validation-rmse:3.84336[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.34574#011validation-rmse:3.85026[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.30678#011validation-rmse:3.83752[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.26443#011validation-rmse:3.86382[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.24448#011validation-rmse:3.8519[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.21439#011validation-rmse:3.83985[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.18818#011validation-rmse:3.85667[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[41]#011train-rmse:1.16266#011validation-rmse:3.84795[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.13203#011validation-rmse:3.86348[0m
[34m[22:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.10376#011validation-rmse:3.85766[0m
[34mStopping. Best iteration:[0m
[34m[33]#011train-rmse:1.43573#011validation-rmse:3.82847
[0m
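###Markdown
 If you prefer not to stream the logs, an alternative is to poll the training job status until it leaves the `InProgress` state. The cell below is only an illustrative sketch; it reuses `session` and `training_job_name` from the cells above and assumes the training job has already been created.
###Code
import time
# Sketch: poll the training job status instead of streaming its logs.
# describe_training_job returns a dictionary whose 'TrainingJobStatus' field
# is one of InProgress, Completed, Failed, Stopping or Stopped.
status = 'InProgress'
while status == 'InProgress':
    time.sleep(30)
    desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
    status = desc['TrainingJobStatus']
print("Training job ended with status:", status)
###Output
_____no_output_____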
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
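###Markdown
 As an optional sanity check, we can ask SageMaker to describe the model we just created and confirm that it points at the expected container and artifacts. This is only a sketch that reuses `session` and `model_name` from the cell above.
###Code
# Sketch: describe_model returns the registered inference container image and
# the S3 location of the model artifacts that we passed in above.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['Image'])
print(model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____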
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to block until the transform job terminates (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..........................................................!
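###Markdown
 `wait_for_transform_job` returns the final description of the job, so we can quickly confirm that it completed and check where the output was written. A minimal sketch, assuming `transform_desc` from the previous cell:
###Code
# Sketch: the description dictionary contains the final status of the transform
# job and the S3 path that the transformed output was written to.
print(transform_desc['TransformJobStatus'])
print(transform_desc['TransformOutput']['S3OutputPath'])
###Output
_____no_output_____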
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (36.0 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-eu-west-2-266442167964/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
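###Markdown
 To put a number on the scatter plot above, we can also compute the root mean squared error on the test set, the same metric that XGBoost reported on the validation set during training. This is an illustrative sketch; it assumes that `Y_test` and `Y_pred` are aligned row for row, which is how the batch transform output is returned for a single input file.
###Code
import numpy as np
# Sketch: RMSE between the actual and predicted median prices. This value is
# directly comparable to the validation-rmse figures in the training logs.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____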
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon require this format. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2019-05-13 19:20:48 Starting - Starting the training job...
2019-05-13 19:20:49 Starting - Launching requested ML instances......
2019-05-13 19:21:50 Starting - Preparing the instances for training......
2019-05-13 19:22:57 Downloading - Downloading input data..
[31mArguments: train[0m
[31m[2019-05-13:19:23:29:INFO] Running standalone xgboost training.[0m
[31m[2019-05-13:19:23:29:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8408.31mb[0m
[31m[2019-05-13:19:23:29:INFO] Determined delimiter of CSV input is ','[0m
[31m[19:23:29] S3DistributionType set as FullyReplicated[0m
[31m[19:23:29] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[31m[2019-05-13:19:23:29:INFO] Determined delimiter of CSV input is ','[0m
[31m[19:23:29] S3DistributionType set as FullyReplicated[0m
[31m[19:23:29] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[31m[0]#011train-rmse:19.1877#011validation-rmse:20.8109[0m
[31mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[31mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[1]#011train-rmse:15.6164#011validation-rmse:17.0394[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[31m[2]#011train-rmse:12.87#011validation-rmse:14.065[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[3]#011train-rmse:10.6443#011validation-rmse:11.7172[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[4]#011train-rmse:8.84818#011validation-rmse:9.77126[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[5]#011train-rmse:7.48938#011validation-rmse:8.3451[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[6]#011train-rmse:6.38541#011validation-rmse:7.22537[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[7]#011train-rmse:5.49051#011validation-rmse:6.32009[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[8]#011train-rmse:4.72935#011validation-rmse:5.58608[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[9]#011train-rmse:4.11245#011validation-rmse:5.02404[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[10]#011train-rmse:3.70679#011validation-rmse:4.66192[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[11]#011train-rmse:3.31103#011validation-rmse:4.30804[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[12]#011train-rmse:3.0436#011validation-rmse:4.06519[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[13]#011train-rmse:2.85228#011validation-rmse:3.87974[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[14]#011train-rmse:2.6142#011validation-rmse:3.72517[0m
[31m[15]#011train-rmse:2.46766#011validation-rmse:3.60172[0m
[31m[16]#011train-rmse:2.37278#011validation-rmse:3.50086[0m
[31m[17]#011train-rmse:2.29178#011validation-rmse:3.36304[0m
[31m[18]#011train-rmse:2.23289#011validation-rmse:3.30724[0m
[31m[19]#011train-rmse:2.17642#011validation-rmse:3.28215[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[20]#011train-rmse:2.13221#011validation-rmse:3.29723[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[21]#011train-rmse:2.07872#011validation-rmse:3.22863[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[22]#011train-rmse:2.05821#011validation-rmse:3.19409[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[23]#011train-rmse:2.00547#011validation-rmse:3.12429[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[24]#011train-rmse:1.97759#011validation-rmse:3.14733[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[25]#011train-rmse:1.91685#011validation-rmse:3.17038[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[26]#011train-rmse:1.81581#011validation-rmse:3.13649[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[27]#011train-rmse:1.77877#011validation-rmse:3.14364[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[28]#011train-rmse:1.74731#011validation-rmse:3.12877[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[31m[29]#011train-rmse:1.65333#011validation-rmse:3.11497[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[30]#011train-rmse:1.59363#011validation-rmse:3.0535[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[31]#011train-rmse:1.52059#011validation-rmse:3.00312[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[32]#011train-rmse:1.48754#011validation-rmse:3.02415[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[33]#011train-rmse:1.47963#011validation-rmse:3.03443[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[34]#011train-rmse:1.42069#011validation-rmse:3.0296[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[35]#011train-rmse:1.38305#011validation-rmse:3.03351[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[36]#011train-rmse:1.36481#011validation-rmse:3.00463[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[31m[37]#011train-rmse:1.35006#011validation-rmse:3.01029[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[38]#011train-rmse:1.31635#011validation-rmse:3.01064[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[31m[39]#011train-rmse:1.30499#011validation-rmse:3.00363[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[31m[40]#011train-rmse:1.28969#011validation-rmse:3.00843[0m
[31m[19:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[31m[41]#011train-rmse:1.26204#011validation-rmse:3.02095[0m
[31mStopping. Best iteration:[0m
[31m[31]#011train-rmse:1.52059#011validation-rmse:3.00312
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to block until the transform job terminates (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (38.8 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-129722534204/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 13.8 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.19)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.18.1)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (2.0.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.1)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3)
Requirement already satisfied: botocore<1.20.0,>=1.19.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.19)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.14.0)
Requirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (45.2.0.post20200210)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (2.2.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.6)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (1.25.10)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=231abdbe934b4ab5b52f0f82247a114a54c1101fe6a4d74a524f7b66d2b38f23
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 0.1.6
Uninstalling smdebug-rulesconfig-0.1.6:
Successfully uninstalled smdebug-rulesconfig-0.1.6
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.16.4.dev0
Uninstalling sagemaker-2.16.4.dev0:
Successfully uninstalled sagemaker-2.16.4.dev0
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
[33mWARNING: You are using pip version 20.0.2; however, version 20.3.2 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index, since the built-in algorithms provided by Amazon require this format. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-12-15 05:35:23 Starting - Launching requested ML instances......
2020-12-15 05:36:38 Starting - Preparing the instances for training......
2020-12-15 05:37:33 Downloading - Downloading input data...
2020-12-15 05:38:09 Training - Downloading the training image...
2020-12-15 05:38:35 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2020-12-15:05:38:30:INFO] Running standalone xgboost training.[0m
[34m[2020-12-15:05:38:30:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8436.02mb[0m
[34m[2020-12-15:05:38:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[05:38:30] S3DistributionType set as FullyReplicated[0m
[34m[05:38:30] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-12-15:05:38:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[05:38:30] S3DistributionType set as FullyReplicated[0m
[34m[05:38:30] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:18.984#011validation-rmse:19.9479[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.4652#011validation-rmse:16.3023[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.706#011validation-rmse:13.5764[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.4943#011validation-rmse:11.3741[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.75168#011validation-rmse:9.76818[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.28873#011validation-rmse:8.38001[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.16892#011validation-rmse:7.40655[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.26377#011validation-rmse:6.71636[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.51142#011validation-rmse:6.14578[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.94237#011validation-rmse:5.79042[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.50182#011validation-rmse:5.56238[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.16446#011validation-rmse:5.4123[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.89326#011validation-rmse:5.33708[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.68234#011validation-rmse:5.30056[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.51943#011validation-rmse:5.2469[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.35487#011validation-rmse:5.13537[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.22026#011validation-rmse:5.08894[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.09944#011validation-rmse:5.10653[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.00823#011validation-rmse:5.10315[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.8963#011validation-rmse:5.09775[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.85834#011validation-rmse:5.12082[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.79486#011validation-rmse:5.13729[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.76622#011validation-rmse:5.15724[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.74129#011validation-rmse:5.16444[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.71031#011validation-rmse:5.18369[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.65071#011validation-rmse:5.12887[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.6331#011validation-rmse:5.08413[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.6137#011validation-rmse:5.07654[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.58813#011validation-rmse:5.11564[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[29]#011train-rmse:1.55355#011validation-rmse:5.08716[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.50818#011validation-rmse:5.08223[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.49784#011validation-rmse:5.05668[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.46564#011validation-rmse:5.07472[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.45501#011validation-rmse:5.08804[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.41418#011validation-rmse:5.09519[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.39002#011validation-rmse:5.10385[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.36221#011validation-rmse:5.1038[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[37]#011train-rmse:1.35339#011validation-rmse:5.09853[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.34296#011validation-rmse:5.10355[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.30992#011validation-rmse:5.08814[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 12 pruned nodes, max_depth=2[0m
[34m[40]#011train-rmse:1.30326#011validation-rmse:5.08625[0m
[34m[05:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.28969#011validation-rmse:5.09038[0m
[34mStopping. Best iteration:[0m
[34m[31]#011train-rmse:1.49784#011validation-rmse:5.05668
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
training_job_info
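# A small sanity check (a sketch; 'TrainingJobStatus' is the status field in the
# describe_training_job response): confirm the job actually completed before building
# a model from its artifacts.
assert training_job_info['TrainingJobStatus'] == 'Completed'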
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
................................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (25.5 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-701904821656/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
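# The markdown above mentions the ideal x = y line; drawing it makes the comparison
# easier to judge (a small optional addition, not part of the original notebook cell).
vmin = min(Y_test.values.min(), Y_pred.values.min())
vmax = max(Y_test.values.max(), Y_pred.values.max())
_ = plt.plot([vmin, vmax], [vmin, vmax], 'k--')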
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
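# We could also tidy up the SageMaker-side resources created above (optional; shown here
# commented out as a sketch). delete_model removes only the model metadata registered with
# SageMaker, not the artifacts stored on S3.
# session.sagemaker_client.delete_model(ModelName=model_name)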
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
[K |████████████████████████████████| 297 kB 16.2 MB/s eta 0:00:01
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=48317399b65c9776db0b563f0fbfa1793886e878eac84816982162e0acd1d0c8
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.24.1
Uninstalling sagemaker-2.24.1:
Successfully uninstalled sagemaker-2.24.1
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
[33mWARNING: You are using pip version 20.3.3; however, version 21.0.1 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
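# (With the Boston dataset's 506 rows, these two 1/3 splits leave roughly 227 rows for
# training, 112 for validation and 167 for testing, which matches the 227x13 and 112x13
# matrix sizes reported in the training log further below.)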
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
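# (If we only wanted the final status rather than the streamed log, a sketch of an
# alternative would be to call session.sagemaker_client.describe_training_job(
#     TrainingJobName=training_job_name) and inspect its 'TrainingJobStatus' field.)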
###Output
2021-02-27 13:54:07 Starting - Launching requested ML instances......
2021-02-27 13:55:19 Starting - Preparing the instances for training......
2021-02-27 13:56:13 Downloading - Downloading input data......
2021-02-27 13:57:12 Training - Downloading the training image..[34mArguments: train[0m
[34m[2021-02-27:13:57:34:INFO] Running standalone xgboost training.[0m
[34m[2021-02-27:13:57:34:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8449.49mb[0m
[34m[2021-02-27:13:57:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:57:34] S3DistributionType set as FullyReplicated[0m
[34m[13:57:34] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-02-27:13:57:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:57:34] S3DistributionType set as FullyReplicated[0m
[34m[13:57:34] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:20.0608#011validation-rmse:19.0298[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.3366#011validation-rmse:15.8127[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:13.3987#011validation-rmse:13.4019[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:11.0663#011validation-rmse:11.5235[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.16467#011validation-rmse:10.0746[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.64567#011validation-rmse:9.02348[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.3857#011validation-rmse:8.09156[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.39364#011validation-rmse:7.46407[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.59844#011validation-rmse:6.98317[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.95311#011validation-rmse:6.52888[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.46207#011validation-rmse:6.23252[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.06975#011validation-rmse:5.93991[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.76824#011validation-rmse:5.74425[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.54193#011validation-rmse:5.60229[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.36777#011validation-rmse:5.50579[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.20177#011validation-rmse:5.38587[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.0846#011validation-rmse:5.33102[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:1.98126#011validation-rmse:5.2819[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:1.89307#011validation-rmse:5.24712[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.82845#011validation-rmse:5.21607[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.78562#011validation-rmse:5.19663[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.73874#011validation-rmse:5.17098[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.67583#011validation-rmse:5.09917[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.61748#011validation-rmse:5.066[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.58774#011validation-rmse:5.00404[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.55804#011validation-rmse:4.99677[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.52264#011validation-rmse:4.94742[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.49447#011validation-rmse:4.95277[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.46231#011validation-rmse:4.92121[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.45089#011validation-rmse:4.92931[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.41705#011validation-rmse:4.88958[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.40194#011validation-rmse:4.89071[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.36054#011validation-rmse:4.87265[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.34167#011validation-rmse:4.86506[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.31038#011validation-rmse:4.86944[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.28562#011validation-rmse:4.87601[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.26271#011validation-rmse:4.83716[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.23712#011validation-rmse:4.78403[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.21905#011validation-rmse:4.7747[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.17799#011validation-rmse:4.70338[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.16512#011validation-rmse:4.70558[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.13723#011validation-rmse:4.66688[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.11588#011validation-rmse:4.66099[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.1041#011validation-rmse:4.66518[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[44]#011train-rmse:1.0922#011validation-rmse:4.66352[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.07671#011validation-rmse:4.67644[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[46]#011train-rmse:1.06939#011validation-rmse:4.64436[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[47]#011train-rmse:1.05356#011validation-rmse:4.63767[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[48]#011train-rmse:1.04407#011validation-rmse:4.62777[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[49]#011train-rmse:1.01713#011validation-rmse:4.57332[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[50]#011train-rmse:1.00466#011validation-rmse:4.57993[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:0.991865#011validation-rmse:4.58069[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[52]#011train-rmse:0.980156#011validation-rmse:4.58837[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[53]#011train-rmse:0.962048#011validation-rmse:4.56262[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[54]#011train-rmse:0.943057#011validation-rmse:4.55912[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 16 pruned nodes, max_depth=1[0m
[34m[55]#011train-rmse:0.941025#011validation-rmse:4.55312[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[56]#011train-rmse:0.93587#011validation-rmse:4.55723[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[57]#011train-rmse:0.9358#011validation-rmse:4.55454[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[58]#011train-rmse:0.930816#011validation-rmse:4.53491[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[59]#011train-rmse:0.924456#011validation-rmse:4.55505[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[60]#011train-rmse:0.908459#011validation-rmse:4.56672[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[61]#011train-rmse:0.893612#011validation-rmse:4.55944[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[62]#011train-rmse:0.893611#011validation-rmse:4.56011[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[63]#011train-rmse:0.891366#011validation-rmse:4.55623[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[64]#011train-rmse:0.891328#011validation-rmse:4.55401[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[65]#011train-rmse:0.891398#011validation-rmse:4.55182[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[66]#011train-rmse:0.89139#011validation-rmse:4.55197[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[67]#011train-rmse:0.891402#011validation-rmse:4.55175[0m
[34m[13:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[68]#011train-rmse:0.881238#011validation-rmse:4.55492[0m
[34mStopping. Best iteration:[0m
[34m[58]#011train-rmse:0.930816#011validation-rmse:4.53491
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
............................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (32.1 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-208895044323/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
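# A complementary numeric summary (a sketch, assuming the rows of test.csv.out line up
# with Y_test): the test-set root mean squared error, which could then be printed or logged.
rmse = float(np.sqrt(np.mean((Y_test.values.ravel() - Y_pred.values.ravel()) ** 2)))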
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Collecting sagemaker==1.72.0
Downloading sagemaker-1.72.0.tar.gz (297 kB)
|████████████████████████████████| 297 kB 29.6 MB/s
[?25h Preparing metadata (setup.py) ... [?25ldone
[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.20.25)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.17.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Collecting smdebug-rulesconfig==0.1.4
Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (4.5.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (21.3)
Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.5.0)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: botocore<1.24.0,>=1.23.25 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.23.25)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.10.0.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.16.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (1.26.5)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Building wheels for collected packages: sagemaker
Building wheel for sagemaker (setup.py) ... [?25ldone
[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=388327 sha256=ea8f0ebad44896a05f66746153023f143b4d97bb7ca06690cc0045e6cc392fe5
Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7
Successfully built sagemaker
Installing collected packages: smdebug-rulesconfig, sagemaker
Attempting uninstall: smdebug-rulesconfig
Found existing installation: smdebug-rulesconfig 1.0.1
Uninstalling smdebug-rulesconfig-1.0.1:
Successfully uninstalled smdebug-rulesconfig-1.0.1
Attempting uninstall: sagemaker
Found existing installation: sagemaker 2.72.1
Uninstalling sagemaker-2.72.1:
Successfully uninstalled sagemaker-2.72.1
Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
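###Markdown
 As a quick sanity check (a tiny sketch; both values come from the objects created in the cell above), we can print the region the session is operating in and the role ARN that will be handed to the training job.
###Code
# The region is used later to look up the XGBoost container image, and the role is
# passed to SageMaker so that the training job has permission to read our data from S3.
print(session.boto_region_name)
print(role)
###Output
 _____no_output_____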
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
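###Markdown
 Before moving on, it can help to glance at what sklearn returned (a small sketch; `boston` is the Bunch object loaded above): a 506x13 feature matrix, a 506-element target vector of median home values, and the list of feature names.
###Code
# Quick look at the contents of the Bunch object returned by load_boston().
print(boston.data.shape)       # (506, 13) feature matrix
print(boston.target.shape)     # (506,) median home values in $1000s
print(boston.feature_names)    # names of the 13 features
###Output
 _____no_output_____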
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
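###Markdown
 As a quick check on the splits (a small sketch using the frames defined above), we can print the shape of each piece; with the 506-row Boston dataset and a test_size of 0.33 at both stages we expect roughly 227 / 112 / 167 rows for train / validation / test.
###Code
# Each X frame should have 13 feature columns; the row counts reflect the two 2/3 - 1/3 splits.
print("train:     ", X_train.shape, Y_train.shape)
print("validation:", X_val.shape, Y_val.shape)
print("test:      ", X_test.shape, Y_test.shape)
###Output
 _____no_output_____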
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
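###Markdown
 To confirm that the three files landed under our prefix (a short sketch; it uses a plain boto3 client alongside the SageMaker session), we can list the objects stored in the default bucket.
###Code
import boto3
# List everything stored under our prefix in the session's default bucket.
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
 _____no_output_____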
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2022-01-27 06:51:52 Starting - Starting the training job...
2022-01-27 06:51:54 Starting - Launching requested ML instances......
2022-01-27 06:52:56 Starting - Preparing the instances for training.........
2022-01-27 06:54:44 Downloading - Downloading input data
2022-01-27 06:54:44 Training - Downloading the training image...
2022-01-27 06:55:18 Uploading - Uploading generated training model
2022-01-27 06:55:18 Completed - Training job completed
[34mArguments: train[0m
[34m[2022-01-27:06:55:07:INFO] Running standalone xgboost training.[0m
[34m[2022-01-27:06:55:07:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8380.95mb[0m
[34m[2022-01-27:06:55:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[06:55:07] S3DistributionType set as FullyReplicated[0m
[34m[06:55:07] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2022-01-27:06:55:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[06:55:07] S3DistributionType set as FullyReplicated[0m
[34m[06:55:07] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[0]#011train-rmse:19.1785#011validation-rmse:20.0361[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.6863#011validation-rmse:16.516[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[2]#011train-rmse:12.9038#011validation-rmse:13.6218[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.6658#011validation-rmse:11.2651[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.89163#011validation-rmse:9.41226[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.4353#011validation-rmse:7.94458[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.31402#011validation-rmse:6.779[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.39061#011validation-rmse:5.80495[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.66883#011validation-rmse:5.11774[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.06911#011validation-rmse:4.62374[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.58516#011validation-rmse:4.19038[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.2515#011validation-rmse:3.88393[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.96659#011validation-rmse:3.673[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.7733#011validation-rmse:3.52283[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.58201#011validation-rmse:3.39681[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.44465#011validation-rmse:3.26482[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.32244#011validation-rmse:3.15308[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.23117#011validation-rmse:3.10717[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.1558#011validation-rmse:3.06012[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.08741#011validation-rmse:3.02357[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.99486#011validation-rmse:3.00045[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.94344#011validation-rmse:2.99151[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.90939#011validation-rmse:2.97408[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.87688#011validation-rmse:2.95809[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.82621#011validation-rmse:2.954[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.74501#011validation-rmse:2.97405[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.71541#011validation-rmse:2.94461[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.64815#011validation-rmse:2.88185[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.60276#011validation-rmse:2.8879[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.58157#011validation-rmse:2.86942[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.51058#011validation-rmse:2.90442[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.47774#011validation-rmse:2.90363[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.42755#011validation-rmse:2.8976[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.40553#011validation-rmse:2.90635[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.36441#011validation-rmse:2.90152[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.34732#011validation-rmse:2.88096[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.33218#011validation-rmse:2.88864[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.30739#011validation-rmse:2.88929[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.29095#011validation-rmse:2.88719[0m
[34m[06:55:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.25852#011validation-rmse:2.89391[0m
[34mStopping. Best iteration:[0m
[34m[29]#011train-rmse:1.58157#011validation-rmse:2.86942[0m
###Markdown
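If we would rather not stream the full log, an alternative (a small sketch using the same low-level client as above) is to poll the job status until SageMaker reports a terminal state such as 'Completed' or 'Failed'.
###Code
# Poll the training job status every 30 seconds until it leaves the 'InProgress' state.
# (The time module was already imported at the top of the notebook.)
status = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
while status == 'InProgress':
    time.sleep(30)
    status = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
print(status)
###Output
 _____no_output_____
###Markdown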
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
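###Markdown
 To verify that the model object was registered (a brief sketch using the same low-level client), we can ask SageMaker to describe it; the response echoes back the inference container and the S3 location of the model artifacts.
###Code
# The describe call returns the configuration we just submitted for this model.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['Image'])
print(model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
 _____no_output_____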
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
    # This sets the maximum size of each individual request sent to the model. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
    # this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to the model at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.....................................................................!
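###Markdown
 Once the wait call returns, we can double-check that the job finished successfully (a short sketch that queries the low-level client directly; it should print 'Completed').
###Code
# A terminal status other than 'Completed' would indicate that something went wrong.
status = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)['TransformJobStatus']
print(status)
###Output
 _____no_output_____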
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (36.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-801008216402/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot of the predicted values against the actual values. If the model were completely accurate, the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay, but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
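###Markdown
 To put a number on the gap visible in the scatter plot (a minimal sketch; it uses the Y_test and Y_pred frames defined above), we can compute the root mean squared error on the test set.
###Code
# Both Y_test and Y_pred are single-column frames, so flatten them to 1-D arrays before comparing.
rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(rmse))
###Output
 _____no_output_____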
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost', "0.90-1")
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:squarederror",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-06-23 21:42:03 Starting - Starting the training job...
2020-06-23 21:42:04 Starting - Launching requested ML instances......
2020-06-23 21:43:08 Starting - Preparing the instances for training...
2020-06-23 21:43:53 Downloading - Downloading input data...
2020-06-23 21:44:10 Training - Downloading the training image..[34mINFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training[0m
[34mINFO:sagemaker-containers:Failed to parse hyperparameter objective value reg:squarederror to Json.[0m
[34mReturning the value itself[0m
[34mINFO:sagemaker-containers:No GPUs detected (normal if no gpus installed)[0m
[34mINFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[21:44:44] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[21:44:44] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Single node training.[0m
[34mINFO:root:Train matrix has 227 rows[0m
[34mINFO:root:Validation matrix has 112 rows[0m
[34m[0]#011train-rmse:19.4169#011validation-rmse:19.641[0m
[34m[1]#011train-rmse:15.8529#011validation-rmse:16.1373[0m
[34m[2]#011train-rmse:13.0029#011validation-rmse:13.4451[0m
[34m[3]#011train-rmse:10.7222#011validation-rmse:11.2891[0m
[34m[4]#011train-rmse:8.87763#011validation-rmse:9.57846[0m
[34m[5]#011train-rmse:7.39233#011validation-rmse:8.18051[0m
[34m[6]#011train-rmse:6.24306#011validation-rmse:7.16369[0m
[34m[7]#011train-rmse:5.31481#011validation-rmse:6.32078[0m
[34m[8]#011train-rmse:4.63824#011validation-rmse:5.7511[0m
[34m[9]#011train-rmse:4.09938#011validation-rmse:5.35858[0m
[34m[10]#011train-rmse:3.65126#011validation-rmse:4.96591[0m
[34m[11]#011train-rmse:3.27765#011validation-rmse:4.73338[0m
[34m[12]#011train-rmse:3.01432#011validation-rmse:4.56817[0m
[34m[13]#011train-rmse:2.78907#011validation-rmse:4.44538[0m
[34m[14]#011train-rmse:2.5708#011validation-rmse:4.28684[0m
[34m[15]#011train-rmse:2.39371#011validation-rmse:4.20217[0m
[34m[16]#011train-rmse:2.30082#011validation-rmse:4.14013[0m
[34m[17]#011train-rmse:2.23106#011validation-rmse:4.1129[0m
[34m[18]#011train-rmse:2.17922#011validation-rmse:4.09605[0m
[34m[19]#011train-rmse:2.10528#011validation-rmse:4.07187[0m
[34m[20]#011train-rmse:2.05327#011validation-rmse:4.07175[0m
[34m[21]#011train-rmse:1.93385#011validation-rmse:3.99626[0m
[34m[22]#011train-rmse:1.90321#011validation-rmse:3.99341[0m
[34m[23]#011train-rmse:1.8683#011validation-rmse:4.00524[0m
[34m[24]#011train-rmse:1.83214#011validation-rmse:4.00157[0m
[34m[25]#011train-rmse:1.82661#011validation-rmse:4.00195[0m
[34m[26]#011train-rmse:1.81126#011validation-rmse:3.99474[0m
[34m[27]#011train-rmse:1.78626#011validation-rmse:3.99312[0m
[34m[28]#011train-rmse:1.72696#011validation-rmse:3.95843[0m
[34m[29]#011train-rmse:1.67332#011validation-rmse:3.93007[0m
[34m[30]#011train-rmse:1.66607#011validation-rmse:3.91736[0m
[34m[31]#011train-rmse:1.64282#011validation-rmse:3.92085[0m
[34m[32]#011train-rmse:1.61#011validation-rmse:3.90635[0m
[34m[33]#011train-rmse:1.56002#011validation-rmse:3.91086[0m
[34m[34]#011train-rmse:1.53761#011validation-rmse:3.89514[0m
[34m[35]#011train-rmse:1.52241#011validation-rmse:3.90529[0m
[34m[36]#011train-rmse:1.4775#011validation-rmse:3.89349[0m
[34m[37]#011train-rmse:1.44391#011validation-rmse:3.88407[0m
[34m[38]#011train-rmse:1.42582#011validation-rmse:3.89022[0m
[34m[39]#011train-rmse:1.38144#011validation-rmse:3.89282[0m
[34m[40]#011train-rmse:1.34165#011validation-rmse:3.83096[0m
[34m[41]#011train-rmse:1.33244#011validation-rmse:3.8405[0m
[34m[42]#011train-rmse:1.29051#011validation-rmse:3.83612[0m
[34m[43]#011train-rmse:1.2678#011validation-rmse:3.83449[0m
[34m[44]#011train-rmse:1.23916#011validation-rmse:3.84452[0m
[34m[45]#011train-rmse:1.22828#011validation-rmse:3.84376[0m
[34m[46]#011train-rmse:1.21423#011validation-rmse:3.85553[0m
[34m[47]#011train-rmse:1.18594#011validation-rmse:3.82262[0m
[34m[48]#011train-rmse:1.17491#011validation-rmse:3.80016[0m
[34m[49]#011train-rmse:1.14187#011validation-rmse:3.78906[0m
[34m[50]#011train-rmse:1.12692#011validation-rmse:3.78793[0m
[34m[51]#011train-rmse:1.11183#011validation-rmse:3.77611[0m
[34m[52]#011train-rmse:1.10696#011validation-rmse:3.77152[0m
[34m[53]#011train-rmse:1.10739#011validation-rmse:3.77851[0m
[34m[54]#011train-rmse:1.09932#011validation-rmse:3.78683[0m
[34m[55]#011train-rmse:1.08263#011validation-rmse:3.79381[0m
[34m[56]#011train-rmse:1.07078#011validation-rmse:3.78421[0m
[34m[57]#011train-rmse:1.04803#011validation-rmse:3.76428[0m
[34m[58]#011train-rmse:1.0479#011validation-rmse:3.76285[0m
[34m[59]#011train-rmse:1.03638#011validation-rmse:3.76034[0m
[34m[60]#011train-rmse:1.02579#011validation-rmse:3.76296[0m
[34m[61]#011train-rmse:1.01148#011validation-rmse:3.77006[0m
[34m[62]#011train-rmse:0.995093#011validation-rmse:3.76368[0m
[34m[63]#011train-rmse:0.967382#011validation-rmse:3.75709[0m
[34m[64]#011train-rmse:0.948549#011validation-rmse:3.74857[0m
[34m[65]#011train-rmse:0.937186#011validation-rmse:3.75786[0m
[34m[66]#011train-rmse:0.937267#011validation-rmse:3.75906[0m
[34m[67]#011train-rmse:0.926992#011validation-rmse:3.75133[0m
[34m[68]#011train-rmse:0.924526#011validation-rmse:3.74623[0m
[34m[69]#011train-rmse:0.910455#011validation-rmse:3.74442[0m
[34m[70]#011train-rmse:0.904081#011validation-rmse:3.73933[0m
[34m[71]#011train-rmse:0.904084#011validation-rmse:3.73955[0m
[34m[72]#011train-rmse:0.895776#011validation-rmse:3.73751[0m
[34m[73]#011train-rmse:0.893845#011validation-rmse:3.73983[0m
[34m[74]#011train-rmse:0.893857#011validation-rmse:3.74017[0m
[34m[75]#011train-rmse:0.885109#011validation-rmse:3.72714[0m
[34m[76]#011train-rmse:0.875395#011validation-rmse:3.74106[0m
[34m[77]#011train-rmse:0.87532#011validation-rmse:3.74396[0m
[34m[78]#011train-rmse:0.863468#011validation-rmse:3.74125[0m
[34m[79]#011train-rmse:0.863484#011validation-rmse:3.74083[0m
[34m[80]#011train-rmse:0.863489#011validation-rmse:3.74075[0m
[34m[81]#011train-rmse:0.863543#011validation-rmse:3.7402[0m
[34m[82]#011train-rmse:0.858926#011validation-rmse:3.74235[0m
[34m[83]#011train-rmse:0.858965#011validation-rmse:3.74322[0m
[34m[84]#011train-rmse:0.858954#011validation-rmse:3.74307[0m
[34m[85]#011train-rmse:0.858978#011validation-rmse:3.74335[0m
2020-06-23 21:44:55 Uploading - Uploading generated training model
2020-06-23 21:44:55 Completed - Training job completed
Training seconds: 62
Billable seconds: 62
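###Markdown
Before moving on, it can be useful to confirm the final metrics that SageMaker recorded for the completed job. The cell below is a minimal sketch, assuming the built-in XGBoost container reported its metrics (for example, a name like 'validation:rmse') in the job's FinalMetricDataList; the exact metric names depend on the algorithm container.
###Code
# Look up the metrics that SageMaker recorded for the finished training job.
# This is an illustrative check only; metric names vary by algorithm container.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
for metric in job_desc.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
###Output
_____no_output_____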
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.............................................!
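###Markdown
If we would rather check on the job without blocking the notebook, we can query its status directly. The cell below is a minimal sketch that uses the DescribeTransformJob API through the same sagemaker_client; the status values listed in the comment are the ones documented for that call.
###Code
# Optionally, poll the transform job instead of blocking with wait_for_transform_job().
# The returned status is one of 'InProgress', 'Completed', 'Failed', 'Stopping' or 'Stopped'.
job_desc = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
print(job_desc['TransformJobStatus'])
###Output
_____no_output_____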
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 3.0 KiB/3.0 KiB (68.2 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-eu-central-1-648654006923/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
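###Markdown
To complement the scatter plot with a single number, we can compute the root mean squared error between the actual and predicted prices. This is a minimal sketch, assuming Y_test and Y_pred are the pandas objects created above and that their rows are in the same order (which is how the batch transform output is written for line-split csv input).
###Code
# Compute the test-set RMSE between the actual and predicted median prices.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____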
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_csv_location = os.path.join(data_dir, 'test.csv')
validation_csv_location = os.path.join(data_dir, 'validation.csv')
train_csv_location = os.path.join(data_dir, 'train.csv')
test_location = session.upload_data(test_csv_location, key_prefix=prefix)
val_location = session.upload_data(validation_csv_location, key_prefix=prefix)
train_location = session.upload_data(train_csv_location, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
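###Markdown
The warning above notes that a newer XGBoost image is available. The cell below is a small illustrative sketch of how the repository version could be pinned explicitly, using the tag suggested in the warning; it assigns to a separate variable so the rest of this notebook keeps using the container selected above.
###Code
# Illustrative only: request a specific XGBoost image version instead of the default.
# The '1.0-1' tag comes from the warning message above; available tags can vary by region and SDK version.
pinned_container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')
print(pinned_container)
###Output
_____no_output_____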
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2019-12-09 01:42:48 Starting - Launching requested ML instances......
2019-12-09 01:43:50 Starting - Preparing the instances for training......
2019-12-09 01:44:47 Downloading - Downloading input data
2019-12-09 01:44:47 Training - Downloading the training image..[34mArguments: train[0m
[34m[2019-12-09:01:45:08:INFO] Running standalone xgboost training.[0m
[34m[2019-12-09:01:45:08:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8529.59mb[0m
[34m[2019-12-09:01:45:08:INFO] Determined delimiter of CSV input is ','[0m
[34m[01:45:08] S3DistributionType set as FullyReplicated[0m
[34m[01:45:08] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2019-12-09:01:45:08:INFO] Determined delimiter of CSV input is ','[0m
[34m[01:45:08] S3DistributionType set as FullyReplicated[0m
[34m[01:45:08] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[0]#011train-rmse:19.918#011validation-rmse:19.085[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:16.2724#011validation-rmse:15.6833[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.3269#011validation-rmse:13.0549[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[3]#011train-rmse:10.9492#011validation-rmse:10.9054[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.03109#011validation-rmse:9.16564[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[5]#011train-rmse:7.52404#011validation-rmse:7.87403[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.32094#011validation-rmse:6.86839[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.39129#011validation-rmse:6.0848[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.61563#011validation-rmse:5.40755[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.99771#011validation-rmse:4.96312[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.50892#011validation-rmse:4.57958[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.16067#011validation-rmse:4.35067[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.91093#011validation-rmse:4.22765[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.66168#011validation-rmse:4.07318[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.49617#011validation-rmse:3.91084[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.32599#011validation-rmse:3.83972[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.24373#011validation-rmse:3.80865[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.15963#011validation-rmse:3.71221[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.0735#011validation-rmse:3.61791[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.02978#011validation-rmse:3.59335[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.96399#011validation-rmse:3.50442[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.92345#011validation-rmse:3.47942[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.87782#011validation-rmse:3.4293[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.85929#011validation-rmse:3.44877[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.80316#011validation-rmse:3.38878[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.77382#011validation-rmse:3.37492[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.73347#011validation-rmse:3.33693[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.7104#011validation-rmse:3.35204[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.67814#011validation-rmse:3.34028[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.65713#011validation-rmse:3.31881[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.61047#011validation-rmse:3.30195[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.58015#011validation-rmse:3.27458[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.55832#011validation-rmse:3.26881[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.53436#011validation-rmse:3.28241[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.50867#011validation-rmse:3.2759[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.4766#011validation-rmse:3.27634[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.46194#011validation-rmse:3.27298[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.45128#011validation-rmse:3.29083[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.41516#011validation-rmse:3.27968[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.39942#011validation-rmse:3.29905[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.37304#011validation-rmse:3.28036[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[41]#011train-rmse:1.36478#011validation-rmse:3.27736[0m
[34m[01:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.3442#011validation-rmse:3.29244[0m
[34mStopping. Best iteration:[0m
[34m[32]#011train-rmse:1.55832#011validation-rmse:3.26881
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
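###Markdown
Before submitting the request it can be helpful to look at the fully assembled data structure in one piece. The cell below is purely illustrative; it just pretty-prints the dictionary we built above using the standard library and has no effect on the job itself.
###Code
import json
# Pretty-print the batch transform request so we can double check the S3 paths and resources
# before asking SageMaker to execute it.
print(json.dumps(transform_request, indent=4))
###Output
_____no_output_____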
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.........................
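###Markdown
If we would rather poll the job ourselves instead of blocking on wait_for_transform_job, the boto3 client also exposes describe_transform_job. The cell below is just a sketch showing how to read back the job's status; it assumes the job has already been created above.
###Code
# Look up the current status of the batch transform job (e.g. InProgress, Completed or Failed).
status = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)['TransformJobStatus']
print(status)
###Output
_____no_output_____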
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
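###Markdown
A single summary number can complement the scatter plot. Assuming the rows of test.csv.out line up with the rows of test.csv (which is how the batch transform job writes its output), the sketch below computes the RMSE of the predictions on the test set with NumPy.
###Code
# Root mean squared error between the actual median prices and the batch transform predictions.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____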
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.22)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: botocore<1.21.0,>=1.20.22 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.22)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.22->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.22->boto3>=1.14.12->sagemaker==1.72.0) (1.26.3)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
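###Markdown
It can be useful to confirm which region the session resolved to and which IAM role will be handed to the training job. The cell below simply prints both values and is only included for convenience.
###Code
# The region comes from the SageMaker session and the role is the ARN returned by get_execution_role().
print(session.boto_region_name)
print(role)
###Output
_____no_output_____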
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
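###Markdown
A quick look at the dataset helps to keep the later steps concrete. The sketch below only inspects attributes that load_boston() returns: the feature matrix, the feature names and the target vector.
###Code
# 506 samples, 13 features and one target column (the median home value).
print(boston.data.shape)
print(boston.feature_names)
print(boston.target.shape)
###Output
_____no_output_____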
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
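###Markdown
Printing the sizes of the resulting splits is an easy way to verify the proportions described above. This cell is purely for inspection.
###Code
# After both splits, roughly 45% of the rows are used for training, ~22% for validation and ~33% for testing.
print(X_train.shape, X_val.shape, X_test.shape)
###Output
_____no_output_____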
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
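###Markdown
Listing the data directory is a simple way to confirm that the three csv files were written before we upload them. The cell below is only a convenience check.
###Code
# We expect to see test.csv, train.csv and validation.csv in the local data directory.
print(os.listdir(data_dir))
###Output
_____no_output_____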
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
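###Markdown
upload_data returns the S3 URI of each uploaded object, so printing the returned values is an easy way to verify where the files ended up under the chosen prefix.
###Code
# Each of these is an s3:// URI inside the session's default bucket, under the chosen prefix.
print(train_location)
print(val_location)
print(test_location)
###Output
_____no_output_____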
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
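###Markdown
The create_training_job call above returns almost immediately, while the actual training happens in the background. If we want a quick look at the job before streaming its logs, the sketch below reads back its status with the boto3 client.
###Code
# The status will move through states such as InProgress and, eventually, Completed or Failed.
job_status = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
print(job_status)
###Output
_____no_output_____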
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2021-03-16 03:47:23 Starting - Starting the training job...
2021-03-16 03:47:25 Starting - Launching requested ML instances......
2021-03-16 03:48:35 Starting - Preparing the instances for training......
2021-03-16 03:49:42 Downloading - Downloading input data...
2021-03-16 03:50:16 Training - Downloading the training image...
2021-03-16 03:50:51 Uploading - Uploading generated training model
2021-03-16 03:50:51 Completed - Training job completed
[34mArguments: train[0m
[34m[2021-03-16:03:50:38:INFO] Running standalone xgboost training.[0m
[34m[2021-03-16:03:50:38:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8451.11mb[0m
[34m[2021-03-16:03:50:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:50:38] S3DistributionType set as FullyReplicated[0m
[34m[03:50:38] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-03-16:03:50:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[03:50:38] S3DistributionType set as FullyReplicated[0m
[34m[03:50:38] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.9053#011validation-rmse:19.3787[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[1]#011train-rmse:16.2943#011validation-rmse:15.6473[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[2]#011train-rmse:13.3378#011validation-rmse:12.653[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:11.0154#011validation-rmse:10.2807[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.18868#011validation-rmse:8.45517[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.66405#011validation-rmse:7.04446[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.45584#011validation-rmse:5.88588[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.52834#011validation-rmse:5.11887[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.78533#011validation-rmse:4.49746[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.21295#011validation-rmse:4.0189[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.66583#011validation-rmse:3.66532[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.32003#011validation-rmse:3.46826[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.03777#011validation-rmse:3.36642[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.82398#011validation-rmse:3.24594[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.58092#011validation-rmse:3.21125[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.44857#011validation-rmse:3.20577[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.35242#011validation-rmse:3.19318[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.27612#011validation-rmse:3.17992[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.16207#011validation-rmse:3.17642[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.11067#011validation-rmse:3.17344[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.0635#011validation-rmse:3.19217[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:2.02635#011validation-rmse:3.20497[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.98223#011validation-rmse:3.25316[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.95131#011validation-rmse:3.28258[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.86463#011validation-rmse:3.2873[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.82788#011validation-rmse:3.28926[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.73945#011validation-rmse:3.24255[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.70105#011validation-rmse:3.24983[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.69244#011validation-rmse:3.23787[0m
[34m[03:50:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.64765#011validation-rmse:3.24789[0m
[34mStopping. Best iteration:[0m
[34m[19]#011train-rmse:2.11067#011validation-rmse:3.17344
[0m
Training seconds: 69
Billable seconds: 69
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
download: s3://sagemaker-us-east-1-399684875495/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-02-28 13:04:30 Starting - Launching requested ML instances......
2020-02-28 13:05:31 Starting - Preparing the instances for training......
2020-02-28 13:06:19 Downloading - Downloading input data...
2020-02-28 13:06:41 Training - Downloading the training image............................[34mArguments: train[0m
[34m[2020-02-28:13:11:37:INFO] Running standalone xgboost training.[0m
[34m[2020-02-28:13:11:37:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8512.81mb[0m
[34m[2020-02-28:13:11:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:11:37] S3DistributionType set as FullyReplicated[0m
[34m[13:11:37] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-02-28:13:11:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:11:37] S3DistributionType set as FullyReplicated[0m
[34m[13:11:37] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.1282#011validation-rmse:19.651[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.66#011validation-rmse:16.0924[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.9098#011validation-rmse:13.416[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.6262#011validation-rmse:11.3257[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.87761#011validation-rmse:9.5755[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.40904#011validation-rmse:8.20498[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.27466#011validation-rmse:7.18218[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.4164#011validation-rmse:6.54497[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.67322#011validation-rmse:5.96538[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.02861#011validation-rmse:5.5919[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.53993#011validation-rmse:5.34213[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.17367#011validation-rmse:5.16933[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.87655#011validation-rmse:5.09892[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.65167#011validation-rmse:5.04437[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.49887#011validation-rmse:5.02001[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.39266#011validation-rmse:4.9158[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.27847#011validation-rmse:4.94848[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.20804#011validation-rmse:4.95132[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.13496#011validation-rmse:4.97691[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.07476#011validation-rmse:4.97078[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.97475#011validation-rmse:4.90105[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.93538#011validation-rmse:4.90521[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.89927#011validation-rmse:4.85133[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.82637#011validation-rmse:4.81944[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.74334#011validation-rmse:4.75641[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.6813#011validation-rmse:4.75552[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.6645#011validation-rmse:4.72802[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.64268#011validation-rmse:4.76211[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[28]#011train-rmse:1.58309#011validation-rmse:4.75592[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.56005#011validation-rmse:4.77664[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.51122#011validation-rmse:4.77406[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.47957#011validation-rmse:4.78212[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[32]#011train-rmse:1.46421#011validation-rmse:4.80585[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[33]#011train-rmse:1.42197#011validation-rmse:4.84643[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=3[0m
[34m[34]#011train-rmse:1.39716#011validation-rmse:4.83965[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.35511#011validation-rmse:4.81569[0m
[34m[13:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.31459#011validation-rmse:4.78686[0m
[34mStopping. Best iteration:[0m
[34m[26]#011train-rmse:1.6645#011validation-rmse:4.72802
[0m
2020-02-28 13:12:08 Uploading - Uploading generated training model
2020-02-28 13:12:08 Completed - Training job completed
Training seconds: 349
Billable seconds: 349
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.....................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
download: s3://sagemaker-us-east-1-788544388985/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
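###Markdown
As a quick sanity check (an illustrative addition, not part of the original notebook), we can print the sizes of the resulting splits to confirm the rough 2/3 and 1/3 proportions described above.
###Code
# Print the number of rows in each split; the exact counts depend on the random split above.
print("train:", X_train.shape[0], "validation:", X_val.shape[0], "test:", X_test.shape[0])
###Output
_____no_output_____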
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost') # same as high level
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
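###Markdown
The warning above notes that a newer XGBoost image is available. As a sketch only (this container is not used elsewhere in the notebook), the newer image could be requested by passing the suggested repo version to the same utility method:
###Code
# Request the newer SageMaker XGBoost image mentioned in the warning above (illustration only).
container_new = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')
###Output
_____no_output_____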
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True) # similar to the wait function in high level api
###Output
2020-07-31 11:54:32 Starting - Launching requested ML instances.........
2020-07-31 11:55:35 Starting - Preparing the instances for training...
2020-07-31 11:56:27 Downloading - Downloading input data...
2020-07-31 11:56:51 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-07-31:11:57:11:INFO] Running standalone xgboost training.[0m
[34m[2020-07-31:11:57:11:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8487.35mb[0m
[34m[2020-07-31:11:57:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:57:11] S3DistributionType set as FullyReplicated[0m
[34m[11:57:11] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-07-31:11:57:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[11:57:11] S3DistributionType set as FullyReplicated[0m
[34m[11:57:11] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.2997#011validation-rmse:20.0503[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.7879#011validation-rmse:16.4663[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.9531#011validation-rmse:13.7761[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.6873#011validation-rmse:11.6875[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.86059#011validation-rmse:9.8643[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.46068#011validation-rmse:8.50082[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.28992#011validation-rmse:7.51623[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.35259#011validation-rmse:6.6466[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.62512#011validation-rmse:6.14[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.99746#011validation-rmse:5.70948[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.53057#011validation-rmse:5.40394[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.17656#011validation-rmse:5.22955[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.84818#011validation-rmse:5.01507[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.64728#011validation-rmse:4.88723[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.39939#011validation-rmse:4.77311[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.22143#011validation-rmse:4.69811[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.10293#011validation-rmse:4.64162[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.04096#011validation-rmse:4.62858[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:1.96032#011validation-rmse:4.57165[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.89459#011validation-rmse:4.5496[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.79997#011validation-rmse:4.50526[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.73866#011validation-rmse:4.46751[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.6879#011validation-rmse:4.45311[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.6197#011validation-rmse:4.44003[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.57024#011validation-rmse:4.40152[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.52113#011validation-rmse:4.3828[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.45982#011validation-rmse:4.37184[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.42523#011validation-rmse:4.35548[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.40538#011validation-rmse:4.34959[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.36773#011validation-rmse:4.35187[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.33362#011validation-rmse:4.35291[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.29462#011validation-rmse:4.33206[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=3[0m
[34m[32]#011train-rmse:1.27391#011validation-rmse:4.32171[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.24919#011validation-rmse:4.31084[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.22695#011validation-rmse:4.30208[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.199#011validation-rmse:4.28338[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[36]#011train-rmse:1.18535#011validation-rmse:4.2761[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.14829#011validation-rmse:4.27898[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.11373#011validation-rmse:4.26382[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[39]#011train-rmse:1.10394#011validation-rmse:4.25782[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.07422#011validation-rmse:4.23094[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.05978#011validation-rmse:4.24631[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.04665#011validation-rmse:4.24319[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.0113#011validation-rmse:4.23246[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:0.989098#011validation-rmse:4.22268[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:0.965392#011validation-rmse:4.21767[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:0.951218#011validation-rmse:4.21404[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[47]#011train-rmse:0.928882#011validation-rmse:4.21859[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[48]#011train-rmse:0.923306#011validation-rmse:4.21352[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:0.916149#011validation-rmse:4.20913[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[50]#011train-rmse:0.916142#011validation-rmse:4.20884[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:0.904529#011validation-rmse:4.20993[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, max_depth=1[0m
[34m[52]#011train-rmse:0.903784#011validation-rmse:4.21303[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[53]#011train-rmse:0.903675#011validation-rmse:4.2133[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[54]#011train-rmse:0.891688#011validation-rmse:4.21372[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[55]#011train-rmse:0.89169#011validation-rmse:4.21366[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[56]#011train-rmse:0.888734#011validation-rmse:4.21405[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:0.877721#011validation-rmse:4.20517[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[58]#011train-rmse:0.877744#011validation-rmse:4.20487[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[59]#011train-rmse:0.861891#011validation-rmse:4.2055[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[60]#011train-rmse:0.861925#011validation-rmse:4.20523[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[61]#011train-rmse:0.854775#011validation-rmse:4.2069[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[62]#011train-rmse:0.854818#011validation-rmse:4.20662[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[63]#011train-rmse:0.842786#011validation-rmse:4.20357[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[64]#011train-rmse:0.842738#011validation-rmse:4.20379[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 22 pruned nodes, max_depth=4[0m
[34m[65]#011train-rmse:0.822452#011validation-rmse:4.19721[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[66]#011train-rmse:0.822457#011validation-rmse:4.19759[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[67]#011train-rmse:0.822461#011validation-rmse:4.19761[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[68]#011train-rmse:0.822514#011validation-rmse:4.19784[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[69]#011train-rmse:0.814201#011validation-rmse:4.20046[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[70]#011train-rmse:0.809872#011validation-rmse:4.19646[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[71]#011train-rmse:0.80988#011validation-rmse:4.1964[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 30 pruned nodes, max_depth=2[0m
[34m[72]#011train-rmse:0.80706#011validation-rmse:4.19296[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[73]#011train-rmse:0.807068#011validation-rmse:4.19322[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[74]#011train-rmse:0.802694#011validation-rmse:4.19477[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[75]#011train-rmse:0.802715#011validation-rmse:4.19498[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[76]#011train-rmse:0.802694#011validation-rmse:4.19474[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[77]#011train-rmse:0.802705#011validation-rmse:4.1949[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[78]#011train-rmse:0.802697#011validation-rmse:4.19482[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[79]#011train-rmse:0.798512#011validation-rmse:4.18983[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[80]#011train-rmse:0.791222#011validation-rmse:4.18476[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[81]#011train-rmse:0.791208#011validation-rmse:4.18471[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[82]#011train-rmse:0.791209#011validation-rmse:4.18471[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[83]#011train-rmse:0.791248#011validation-rmse:4.18483[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[84]#011train-rmse:0.791158#011validation-rmse:4.18451[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[85]#011train-rmse:0.791185#011validation-rmse:4.18463[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[86]#011train-rmse:0.787312#011validation-rmse:4.17944[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[87]#011train-rmse:0.787266#011validation-rmse:4.17922[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[88]#011train-rmse:0.787312#011validation-rmse:4.17945[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[89]#011train-rmse:0.787304#011validation-rmse:4.17941[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[90]#011train-rmse:0.787493#011validation-rmse:4.17992[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[91]#011train-rmse:0.781909#011validation-rmse:4.18006[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[92]#011train-rmse:0.772241#011validation-rmse:4.17979[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[93]#011train-rmse:0.7719#011validation-rmse:4.17904[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[94]#011train-rmse:0.771884#011validation-rmse:4.17896[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[95]#011train-rmse:0.767925#011validation-rmse:4.17605[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[96]#011train-rmse:0.767904#011validation-rmse:4.17592[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[97]#011train-rmse:0.758586#011validation-rmse:4.17219[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[98]#011train-rmse:0.758628#011validation-rmse:4.17184[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[99]#011train-rmse:0.758675#011validation-rmse:4.1717[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[100]#011train-rmse:0.758591#011validation-rmse:4.17206[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[101]#011train-rmse:0.758783#011validation-rmse:4.17148[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[102]#011train-rmse:0.758742#011validation-rmse:4.17155[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[103]#011train-rmse:0.75866#011validation-rmse:4.17174[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[104]#011train-rmse:0.758732#011validation-rmse:4.17157[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[105]#011train-rmse:0.758738#011validation-rmse:4.17156[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[106]#011train-rmse:0.758688#011validation-rmse:4.17167[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[107]#011train-rmse:0.758651#011validation-rmse:4.17177[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[108]#011train-rmse:0.758617#011validation-rmse:4.17189[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[109]#011train-rmse:0.758656#011validation-rmse:4.17175[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[110]#011train-rmse:0.758608#011validation-rmse:4.17193[0m
[34m[11:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[111]#011train-rmse:0.758597#011validation-rmse:4.17201[0m
[34mStopping. Best iteration:[0m
[34m[101]#011train-rmse:0.758783#011validation-rmse:4.17148
[0m
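###Markdown
The best iteration reported above can also be retrieved programmatically. The cell below is an illustrative sketch (not part of the original notebook); it assumes that `describe_training_job` reports the final metrics under the `FinalMetricDataList` key, as in the current SageMaker API.
###Code
# Fetch the training job description and print any final metrics SageMaker recorded for it.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
for metric in job_desc.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
###Output
_____no_output_____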
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
model_info
###Output
_____no_output_____
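###Markdown
If we want to double-check what was registered, the model can be looked up by name. This is an optional illustrative sketch (not part of the original notebook) using the same low level client as above.
###Code
# Retrieve the SageMaker model we just created and print the container image and artifact location.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['Image'])
print(model_desc['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____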
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here; to see the additional options, please refer to the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
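###Markdown
The `upload_data` calls above return the S3 URIs of the uploaded objects. Printing them is a quick illustrative check (not part of the original notebook) that the files ended up under the expected bucket and prefix.
###Code
# Show where each csv file was uploaded on S3.
print(test_location)
print(val_location)
print(train_location)
###Output
_____no_output_____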
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-05-02 22:08:08 Starting - Starting the training job...
2020-05-02 22:08:09 Starting - Launching requested ML instances...
2020-05-02 22:09:08 Starting - Preparing the instances for training......
2020-05-02 22:10:04 Downloading - Downloading input data...
2020-05-02 22:10:39 Training - Downloading the training image...
2020-05-02 22:11:04 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2020-05-02:22:10:59:INFO] Running standalone xgboost training.[0m
[34m[2020-05-02:22:10:59:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8497.86mb[0m
[34m[2020-05-02:22:10:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:10:59] S3DistributionType set as FullyReplicated[0m
[34m[22:10:59] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-02:22:10:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[22:10:59] S3DistributionType set as FullyReplicated[0m
[34m[22:10:59] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.2307#011validation-rmse:19.2798[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 2 pruned nodes, max_depth=2[0m
[34m[1]#011train-rmse:15.6953#011validation-rmse:15.7478[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[2]#011train-rmse:12.854#011validation-rmse:12.9049[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.6507#011validation-rmse:10.7388[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=3[0m
[34m[4]#011train-rmse:8.8049#011validation-rmse:8.7754[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.37861#011validation-rmse:7.39511[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.21183#011validation-rmse:6.2564[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.25736#011validation-rmse:5.3666[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.50871#011validation-rmse:4.73579[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.93617#011validation-rmse:4.2545[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.52263#011validation-rmse:3.90233[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.18702#011validation-rmse:3.58303[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.91629#011validation-rmse:3.33631[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.72921#011validation-rmse:3.20549[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.53822#011validation-rmse:3.08466[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.3536#011validation-rmse:3.074[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.25115#011validation-rmse:2.97297[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.185#011validation-rmse:2.92042[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.09413#011validation-rmse:2.9258[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.0184#011validation-rmse:2.91563[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.95454#011validation-rmse:2.9026[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.8989#011validation-rmse:2.91316[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.8452#011validation-rmse:2.89332[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.81458#011validation-rmse:2.90113[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.76526#011validation-rmse:2.88366[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.71932#011validation-rmse:2.85001[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.66739#011validation-rmse:2.86277[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.62065#011validation-rmse:2.84214[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.59716#011validation-rmse:2.84697[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.56404#011validation-rmse:2.82948[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.53829#011validation-rmse:2.81254[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.50763#011validation-rmse:2.79211[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.47836#011validation-rmse:2.79809[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.43734#011validation-rmse:2.78985[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.39943#011validation-rmse:2.78091[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.37801#011validation-rmse:2.77396[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.32741#011validation-rmse:2.76484[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.27486#011validation-rmse:2.75847[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.22905#011validation-rmse:2.75127[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.21517#011validation-rmse:2.75664[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.18651#011validation-rmse:2.75255[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.16582#011validation-rmse:2.76559[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.1563#011validation-rmse:2.75285[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.13127#011validation-rmse:2.76029[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[44]#011train-rmse:1.11477#011validation-rmse:2.76951[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.09016#011validation-rmse:2.7656[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.08164#011validation-rmse:2.77088[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.05196#011validation-rmse:2.77749[0m
[34m[22:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.03439#011validation-rmse:2.78186[0m
[34mStopping. Best iteration:[0m
[34m[38]#011train-rmse:1.22905#011validation-rmse:2.75127
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
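###Markdown
As an optional aside (a sketch on our part, not a required step), we can double check that SageMaker registered the model by describing it with the same low level client used above.
###Code
# Optional sketch: confirm the model was registered. describe_model returns the model
# name along with the primary container and artifact location we supplied above.
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['ModelName'], model_desc['PrimaryContainer']['Image'])
###Output
_____no_output_____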
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..............................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (36.0 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-west-2-202593872157/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
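###Markdown
As an optional addition (a sketch of ours, not part of the original workflow), the scatter plot can be summarised with a single number by computing the test RMSE from the Y_test and Y_pred dataframes loaded above.
###Code
# Optional sketch: quantify the scatter plot with the root mean squared error on the
# test set. Y_test and Y_pred are the dataframes defined in the cells above.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____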
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Machine Learning Engineer Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
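###Markdown
The warning above already names the fix; as an optional aside (not used by the training job in this notebook), pinning the container to the newer XGBoost image would look like the sketch below, where container_pinned is an illustrative name of ours.
###Code
# Optional sketch, not used by the training job in this notebook: request the newer
# XGBoost image explicitly, as suggested by the warning above. 'container_pinned' is
# only an illustration and is not referenced elsewhere.
container_pinned = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1')
###Output
_____no_output_____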
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-03-25 12:01:56 Starting - Starting the training job...
2020-03-25 12:01:58 Starting - Launching requested ML instances......
2020-03-25 12:02:57 Starting - Preparing the instances for training...
2020-03-25 12:03:56 Downloading - Downloading input data...
2020-03-25 12:04:12 Training - Downloading the training image.[34mArguments: train[0m
[34m[2020-03-25:12:04:32:INFO] Running standalone xgboost training.[0m
[34m[2020-03-25:12:04:32:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8512.15mb[0m
[34m[2020-03-25:12:04:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:04:32] S3DistributionType set as FullyReplicated[0m
[34m[12:04:32] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-03-25:12:04:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:04:32] S3DistributionType set as FullyReplicated[0m
[34m[12:04:32] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.7599#011validation-rmse:19.4162[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.1442#011validation-rmse:15.6447[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.2498#011validation-rmse:12.7063[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.8703#011validation-rmse:10.3862[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:9.02165#011validation-rmse:8.54403[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.4914#011validation-rmse:7.11418[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.2757#011validation-rmse:5.95186[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.39218#011validation-rmse:5.16752[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.62014#011validation-rmse:4.5354[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.04346#011validation-rmse:4.10085[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.54176#011validation-rmse:3.82729[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.14758#011validation-rmse:3.61492[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.85772#011validation-rmse:3.44182[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.69096#011validation-rmse:3.39702[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.52231#011validation-rmse:3.35583[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.37769#011validation-rmse:3.32126[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.27773#011validation-rmse:3.29283[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.19906#011validation-rmse:3.26808[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.09917#011validation-rmse:3.28341[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.03807#011validation-rmse:3.29325[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.97834#011validation-rmse:3.31877[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.93743#011validation-rmse:3.3262[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.88159#011validation-rmse:3.33861[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.85367#011validation-rmse:3.35458[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.81234#011validation-rmse:3.38529[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.77337#011validation-rmse:3.40064[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.70221#011validation-rmse:3.37277[0m
[34m[12:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.65329#011validation-rmse:3.36636[0m
[34mStopping. Best iteration:[0m
[34m[17]#011train-rmse:2.19906#011validation-rmse:3.26808
[0m
2020-03-25 12:04:44 Uploading - Uploading generated training model
2020-03-25 12:04:44 Completed - Training job completed
Training seconds: 48
Billable seconds: 48
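###Markdown
As an optional aside (a sketch of ours, not part of the original flow), the job status can also be polled directly with describe_training_job, the same call the next cell uses to fetch the model artifacts.
###Code
# Optional sketch: check the training job status without streaming the logs.
# TrainingJobStatus is one of 'InProgress', 'Completed', 'Failed', 'Stopping' or 'Stopped'.
job_status = session.sagemaker_client.describe_training_job(
    TrainingJobName=training_job_name)['TrainingJobStatus']
print(job_status)
###Output
_____no_output_____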
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute. We will only be using some of the options available here; to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to finish (and keep an eye on its progress) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
................................................!
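###Markdown
As an optional aside (a sketch of ours, not part of the original flow), the transform job status can also be queried directly, which is handy if the wait call above is interrupted.
###Code
# Optional sketch: query the transform job status directly. TransformJobStatus is one of
# 'InProgress', 'Completed', 'Failed', 'Stopping' or 'Stopped'.
transform_status = session.sagemaker_client.describe_transform_job(
    TransformJobName=transform_job_name)['TransformJobStatus']
print(transform_status)
###Output
_____no_output_____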
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook, we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (36.4 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-west-1-270372225889/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
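###Markdown
As an optional sanity check (our addition, not in the original notebook), printing the shapes confirms the roughly 2/3 and 1/3 proportions described in the comments above.
###Code
# Optional sanity check: the train, validation and test splits should roughly follow
# the 2/3 and 1/3 proportions described in the comments above.
print(X_train.shape, X_val.shape, X_test.shape)
###Output
_____no_output_____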
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-02-21 20:04:35 Starting - Launching requested ML instances.........
2020-02-21 20:05:40 Starting - Preparing the instances for training......
2020-02-21 20:07:06 Downloading - Downloading input data
2020-02-21 20:07:06 Training - Downloading the training image...
2020-02-21 20:07:31 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2020-02-21:20:07:26:INFO] Running standalone xgboost training.[0m
[34m[2020-02-21:20:07:26:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8516.32mb[0m
[34m[2020-02-21:20:07:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:07:26] S3DistributionType set as FullyReplicated[0m
[34m[20:07:26] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-02-21:20:07:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:07:26] S3DistributionType set as FullyReplicated[0m
[34m[20:07:26] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.3407#011validation-rmse:19.0174[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.883#011validation-rmse:15.4946[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[2]#011train-rmse:13.1174#011validation-rmse:12.7599[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.8384#011validation-rmse:10.4784[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.99606#011validation-rmse:8.68877[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.50964#011validation-rmse:7.34865[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.34091#011validation-rmse:6.29398[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.39534#011validation-rmse:5.5726[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.65919#011validation-rmse:5.00838[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.08165#011validation-rmse:4.69255[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.68269#011validation-rmse:4.48584[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.23848#011validation-rmse:4.3016[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.97158#011validation-rmse:4.17103[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.75171#011validation-rmse:4.10891[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.588#011validation-rmse:4.08587[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.42966#011validation-rmse:4.04782[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.31604#011validation-rmse:4.02502[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.23288#011validation-rmse:4.03895[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.09658#011validation-rmse:3.95389[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.03049#011validation-rmse:3.99598[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.9004#011validation-rmse:4.01009[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.84787#011validation-rmse:3.97603[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.7834#011validation-rmse:3.93685[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.74515#011validation-rmse:3.94556[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.69396#011validation-rmse:3.94697[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.67724#011validation-rmse:3.9326[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.64085#011validation-rmse:3.93988[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.60315#011validation-rmse:3.95097[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.55462#011validation-rmse:3.94253[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.52629#011validation-rmse:3.91638[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.50959#011validation-rmse:3.90042[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.48269#011validation-rmse:3.87452[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.46514#011validation-rmse:3.86291[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.45076#011validation-rmse:3.84581[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.429#011validation-rmse:3.85521[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.40798#011validation-rmse:3.86043[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.38405#011validation-rmse:3.84193[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[37]#011train-rmse:1.36396#011validation-rmse:3.82321[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.32706#011validation-rmse:3.83098[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=4[0m
[34m[39]#011train-rmse:1.30123#011validation-rmse:3.83136[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.25246#011validation-rmse:3.82641[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.2314#011validation-rmse:3.82373[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.21903#011validation-rmse:3.81523[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.18143#011validation-rmse:3.81824[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.13423#011validation-rmse:3.82986[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.1084#011validation-rmse:3.8285[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[46]#011train-rmse:1.10604#011validation-rmse:3.82867[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.08667#011validation-rmse:3.81605[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.06894#011validation-rmse:3.82093[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[49]#011train-rmse:1.06713#011validation-rmse:3.81823[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[50]#011train-rmse:1.05705#011validation-rmse:3.8196[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:1.03008#011validation-rmse:3.80318[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.00352#011validation-rmse:3.81009[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[53]#011train-rmse:1.00117#011validation-rmse:3.80896[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:0.994164#011validation-rmse:3.79694[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[55]#011train-rmse:0.993903#011validation-rmse:3.79872[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:0.985012#011validation-rmse:3.80578[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[57]#011train-rmse:0.972531#011validation-rmse:3.82378[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 24 pruned nodes, max_depth=3[0m
[34m[58]#011train-rmse:0.962757#011validation-rmse:3.81847[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[59]#011train-rmse:0.957632#011validation-rmse:3.81004[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[60]#011train-rmse:0.957404#011validation-rmse:3.81169[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[61]#011train-rmse:0.940639#011validation-rmse:3.8268[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[62]#011train-rmse:0.933149#011validation-rmse:3.82682[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 22 pruned nodes, max_depth=1[0m
[34m[63]#011train-rmse:0.934859#011validation-rmse:3.82732[0m
[34m[20:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[64]#011train-rmse:0.934834#011validation-rmse:3.82763[0m
[34mStopping. Best iteration:[0m
[34m[54]#011train-rmse:0.994164#011validation-rmse:3.79694
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (35.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-064263160711/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model were completely accurate, the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay, but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
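###Markdown
To put a number on the scatter plot above, the small sketch below computes the test RMSE. It is only an illustrative addition and assumes the `Y_test` and `Y_pred` frames defined above; note that `.values` is used so the comparison is made by row position rather than by pandas index.
###Code
# Illustrative sketch (assumes Y_test and Y_pred are available from the cells above).
import numpy as np
test_rmse = np.sqrt(np.mean((Y_test.values - Y_pred.values) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____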
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
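###Markdown
As a quick, purely illustrative check of the splits above, we can print the resulting shapes. With the 506 rows in the Boston dataset this works out to roughly 227 training, 112 validation and 167 test samples, matching the matrix sizes reported later in the training logs.
###Code
# Illustrative sketch (assumes the split variables above exist).
print(X_train.shape, X_val.shape, X_test.shape)
###Output
_____no_output_____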
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include a header
# or an index, as this is the format required by Amazon's built-in algorithms. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
_____no_output_____
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
_____no_output_____
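###Markdown
If we only want to check on the job rather than stream its logs, a minimal sketch such as the following asks SageMaker for the current status. It is an illustrative aside that reuses the `session` and `training_job_name` objects from above.
###Code
# Illustrative sketch (assumes training_job_name and session are defined as above).
# describe_training_job is the same boto3 SageMaker call used below to fetch the model artifacts.
job_desc = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
print(job_desc['TrainingJobStatus'])  # e.g. 'InProgress' or 'Completed'
###Output
_____no_output_____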
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
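###Markdown
To confirm that the model was registered as expected, we can ask SageMaker to describe it. This is an optional, illustrative check that assumes the `model_name` and `session` objects from above.
###Code
# Illustrative sketch (assumes model_name and session are defined as above).
model_desc = session.sagemaker_client.describe_model(ModelName=model_name)
print(model_desc['PrimaryContainer']['ModelDataUrl'])  # S3 location of the model artifacts
###Output
_____no_output_____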
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
_____no_output_____
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model were completely accurate, the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay, but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include a header
# or an index, as this is the format required by Amazon's built-in algorithms. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
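###Markdown
As a small optional sanity check (illustrative only), we can read one of the files back with pandas to confirm that it has no header row and no index column; this assumes the `data_dir` variable defined above.
###Code
# Illustrative sketch (assumes data_dir is defined as above and train.csv was just written).
sanity_check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)
print(sanity_check.shape)  # First column is the target, the remaining 13 columns are the features
sanity_check.head()
###Output
_____no_output_____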
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:
get_image_uri(region, 'xgboost', '0.90-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-05-10 08:43:05 Starting - Launching requested ML instances......
2020-05-10 08:44:02 Starting - Preparing the instances for training......
2020-05-10 08:44:56 Downloading - Downloading input data...
2020-05-10 08:45:38 Training - Training image download completed. Training in progress.
2020-05-10 08:45:38 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2020-05-10:08:45:33:INFO] Running standalone xgboost training.[0m
[34m[2020-05-10:08:45:33:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8500.7mb[0m
[34m[2020-05-10:08:45:33:INFO] Determined delimiter of CSV input is ','[0m
[34m[08:45:33] S3DistributionType set as FullyReplicated[0m
[34m[08:45:33] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-10:08:45:33:INFO] Determined delimiter of CSV input is ','[0m
[34m[08:45:33] S3DistributionType set as FullyReplicated[0m
[34m[08:45:33] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:18.8353#011validation-rmse:19.893[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:15.3286#011validation-rmse:16.3259[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.5915#011validation-rmse:13.5732[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.3672#011validation-rmse:11.2539[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.60857#011validation-rmse:9.45408[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.23136#011validation-rmse:8.00584[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.10573#011validation-rmse:6.78982[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.16556#011validation-rmse:5.8742[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.5043#011validation-rmse:5.31191[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.96361#011validation-rmse:4.84653[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.54639#011validation-rmse:4.4979[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.22889#011validation-rmse:4.18169[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.93492#011validation-rmse:3.91551[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.70601#011validation-rmse:3.79073[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.54076#011validation-rmse:3.68977[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.42893#011validation-rmse:3.67625[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.29577#011validation-rmse:3.62804[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.16038#011validation-rmse:3.58688[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.10757#011validation-rmse:3.54799[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.03586#011validation-rmse:3.45832[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.97603#011validation-rmse:3.46212[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.92038#011validation-rmse:3.44023[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.89338#011validation-rmse:3.41632[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.8722#011validation-rmse:3.43454[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.84618#011validation-rmse:3.4244[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.82605#011validation-rmse:3.41896[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.75489#011validation-rmse:3.42088[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.69134#011validation-rmse:3.45165[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.64876#011validation-rmse:3.41681[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.61615#011validation-rmse:3.4216[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.59873#011validation-rmse:3.41558[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.57653#011validation-rmse:3.45118[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.53619#011validation-rmse:3.46866[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.50389#011validation-rmse:3.44585[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.48033#011validation-rmse:3.4396[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.43009#011validation-rmse:3.41167[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.42053#011validation-rmse:3.41057[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.37781#011validation-rmse:3.40161[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[38]#011train-rmse:1.36834#011validation-rmse:3.41873[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.34344#011validation-rmse:3.42947[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.30015#011validation-rmse:3.44921[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.25636#011validation-rmse:3.43298[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[42]#011train-rmse:1.23585#011validation-rmse:3.43149[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[43]#011train-rmse:1.23102#011validation-rmse:3.43525[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[44]#011train-rmse:1.22764#011validation-rmse:3.44878[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 2 pruned nodes, max_depth=4[0m
[34m[45]#011train-rmse:1.21648#011validation-rmse:3.44402[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.20086#011validation-rmse:3.47402[0m
[34m[08:45:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-rmse:1.19296#011validation-rmse:3.4979[0m
[34mStopping. Best iteration:[0m
[34m[37]#011train-rmse:1.37781#011validation-rmse:3.40161
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
    # Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and make sure it is progressing) we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...............................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (39.9 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-eu-central-1-293973958717/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model were completely accurate, the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay, but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include a header
# or an index, as this is the format required by Amazon's built-in algorithms. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-2'. For example:
get_image_uri(region, 'xgboost', '0.90-2').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-05-07 21:26:31 Starting - Launching requested ML instances...
2020-05-07 21:27:25 Starting - Preparing the instances for training......
2020-05-07 21:28:27 Downloading - Downloading input data...
2020-05-07 21:28:46 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-05-07:21:29:07:INFO] Running standalone xgboost training.[0m
[34m[2020-05-07:21:29:07:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8472.79mb[0m
[34m[2020-05-07:21:29:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:29:07] S3DistributionType set as FullyReplicated[0m
[34m[21:29:07] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-07:21:29:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:29:07] S3DistributionType set as FullyReplicated[0m
[34m[21:29:07] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.7177#011validation-rmse:20.3302[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.0914#011validation-rmse:16.5966[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.208#011validation-rmse:13.6444[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.8735#011validation-rmse:11.3928[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.04472#011validation-rmse:9.53416[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.5921#011validation-rmse:8.17102[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.44351#011validation-rmse:7.05853[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.52157#011validation-rmse:6.13293[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.86507#011validation-rmse:5.43614[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.36879#011validation-rmse:4.92825[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.88967#011validation-rmse:4.44923[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.5256#011validation-rmse:4.10511[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:3.24128#011validation-rmse:3.85509[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:3.04639#011validation-rmse:3.67319[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.88315#011validation-rmse:3.55872[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.75257#011validation-rmse:3.46787[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.63261#011validation-rmse:3.40862[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.52412#011validation-rmse:3.35635[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.43228#011validation-rmse:3.30461[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.3286#011validation-rmse:3.23218[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:2.23289#011validation-rmse:3.20258[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:2.14962#011validation-rmse:3.23724[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:2.06265#011validation-rmse:3.22705[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:2.01332#011validation-rmse:3.26336[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.97914#011validation-rmse:3.20899[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.91775#011validation-rmse:3.15669[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.86409#011validation-rmse:3.13054[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.83825#011validation-rmse:3.12363[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.78549#011validation-rmse:3.17488[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.73854#011validation-rmse:3.11752[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.72038#011validation-rmse:3.1153[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.68445#011validation-rmse:3.13002[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.65387#011validation-rmse:3.12716[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.63642#011validation-rmse:3.13113[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.5675#011validation-rmse:3.07785[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.53601#011validation-rmse:3.09608[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.51381#011validation-rmse:3.06614[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[37]#011train-rmse:1.47165#011validation-rmse:3.05955[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.44324#011validation-rmse:3.01629[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.39828#011validation-rmse:2.99487[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.36604#011validation-rmse:2.97142[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[41]#011train-rmse:1.34434#011validation-rmse:2.9581[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.30995#011validation-rmse:2.96513[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.29653#011validation-rmse:2.95673[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[44]#011train-rmse:1.25615#011validation-rmse:2.93075[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-rmse:1.22642#011validation-rmse:2.93421[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[46]#011train-rmse:1.1864#011validation-rmse:2.92098[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[47]#011train-rmse:1.17828#011validation-rmse:2.92156[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.1507#011validation-rmse:2.93359[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[49]#011train-rmse:1.11163#011validation-rmse:2.91388[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[50]#011train-rmse:1.09579#011validation-rmse:2.89824[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:1.0805#011validation-rmse:2.90241[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[52]#011train-rmse:1.077#011validation-rmse:2.90275[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[53]#011train-rmse:1.05924#011validation-rmse:2.92355[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[54]#011train-rmse:1.05497#011validation-rmse:2.9359[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[55]#011train-rmse:1.05498#011validation-rmse:2.93598[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:1.04614#011validation-rmse:2.93489[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=2[0m
[34m[57]#011train-rmse:1.03536#011validation-rmse:2.92588[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[58]#011train-rmse:1.01421#011validation-rmse:2.89811[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 16 pruned nodes, max_depth=1[0m
[34m[59]#011train-rmse:1.01264#011validation-rmse:2.89536[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[60]#011train-rmse:1.00028#011validation-rmse:2.89557[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[61]#011train-rmse:0.994839#011validation-rmse:2.88563[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3[0m
[34m[62]#011train-rmse:0.988257#011validation-rmse:2.89104[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[63]#011train-rmse:0.977341#011validation-rmse:2.8886[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 22 pruned nodes, max_depth=4[0m
[34m[64]#011train-rmse:0.970646#011validation-rmse:2.8903[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=3[0m
[34m[65]#011train-rmse:0.957126#011validation-rmse:2.89566[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[66]#011train-rmse:0.950286#011validation-rmse:2.89285[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[67]#011train-rmse:0.943306#011validation-rmse:2.89401[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 20 pruned nodes, max_depth=1[0m
[34m[68]#011train-rmse:0.942712#011validation-rmse:2.89585[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[69]#011train-rmse:0.940752#011validation-rmse:2.89171[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[70]#011train-rmse:0.94076#011validation-rmse:2.89183[0m
[34m[21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[71]#011train-rmse:0.933921#011validation-rmse:2.90102[0m
[34mStopping. Best iteration:[0m
[34m[61]#011train-rmse:0.994839#011validation-rmse:2.88563
[0m
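###Markdown
Streaming the logs is one convenient way to block until the job finishes. As an alternative, the cell below is a minimal added sketch (not part of the original workflow) that polls the training job status directly through the low level client's `describe_training_job` call.
###Code
# Poll the training job status every 30 seconds until it is no longer running.
status = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
while status in ('InProgress', 'Stopping'):
    time.sleep(30)
    status = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
print("Training job finished with status: " + status)
###Output
_____no_output_____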
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
........................................
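###Markdown
Had the transform job failed, the call above would have raised an error. The cell below is a small added sketch (not part of the original workflow) that inspects the job description for its final status and, when one is reported, the failure reason.
###Code
# Describe the transform job and report its final status.
transform_job_info = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
print("Status: " + transform_job_info['TransformJobStatus'])
# The 'FailureReason' field only appears in the description when the job did not succeed.
print("Failure reason: " + transform_job_info.get('FailureReason', 'None reported'))
###Output
_____no_output_____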
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
_____no_output_____
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
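###Markdown
To put a rough number on how well the model did, the cell below is a small added sketch (not part of the original workflow) that computes the root mean squared error between the actual and predicted median prices loaded above.
###Code
# Y_test and Y_pred are single column dataframes holding the actual and predicted prices in the same row order,
# since the batch transform output preserves the order of the rows in test.csv.
test_rmse = np.sqrt(np.mean((Y_test.values.flatten() - Y_pred.values.flatten()) ** 2))
print("Test RMSE: {:.3f}".format(test_rmse))
###Output
_____no_output_____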
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
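###Markdown
The clean up above only removes local files. The SageMaker model resource created earlier persists in the account; the cell below is a small added sketch (not part of the original workflow) that removes it with the low level client, assuming `model_name` is still defined in the session.
###Code
# Delete the SageMaker model object so it no longer appears in the account.
session.sagemaker_client.delete_model(ModelName=model_name)
###Output
_____no_output_____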
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-07-19 21:53:30 Starting - Starting the training job...
2020-07-19 21:53:33 Starting - Launching requested ML instances......
2020-07-19 21:54:47 Starting - Preparing the instances for training......
2020-07-19 21:55:49 Downloading - Downloading input data
2020-07-19 21:55:49 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-07-19:21:56:10:INFO] Running standalone xgboost training.[0m
[34m[2020-07-19:21:56:10:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8480.23mb[0m
[34m[2020-07-19:21:56:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:56:10] S3DistributionType set as FullyReplicated[0m
[34m[21:56:10] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-07-19:21:56:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:56:10] S3DistributionType set as FullyReplicated[0m
[34m[21:56:10] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.3467#011validation-rmse:17.7594[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[1]#011train-rmse:15.6883#011validation-rmse:14.3884[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:12.8311#011validation-rmse:11.8712[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.4892#011validation-rmse:9.8473[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:8.72349#011validation-rmse:8.38292[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[5]#011train-rmse:7.21805#011validation-rmse:7.22144[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.02614#011validation-rmse:6.40139[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.09863#011validation-rmse:5.79526[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.35667#011validation-rmse:5.34028[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.74467#011validation-rmse:5.05733[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.2672#011validation-rmse:4.90879[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:2.91745#011validation-rmse:4.79682[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.63003#011validation-rmse:4.7102[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.41156#011validation-rmse:4.66747[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.20116#011validation-rmse:4.62003[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.06944#011validation-rmse:4.59941[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:1.93426#011validation-rmse:4.58421[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:1.84549#011validation-rmse:4.55751[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:1.76262#011validation-rmse:4.56733[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.70266#011validation-rmse:4.56357[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.6513#011validation-rmse:4.55152[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.6143#011validation-rmse:4.53833[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.53083#011validation-rmse:4.58605[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.46451#011validation-rmse:4.58929[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.42364#011validation-rmse:4.58461[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.39706#011validation-rmse:4.5792[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.36797#011validation-rmse:4.56942[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.34671#011validation-rmse:4.56818[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.30489#011validation-rmse:4.55476[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.2652#011validation-rmse:4.58462[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.24059#011validation-rmse:4.57373[0m
[34m[21:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.19904#011validation-rmse:4.57499[0m
[34mStopping. Best iteration:[0m
[34m[21]#011train-rmse:1.6143#011validation-rmse:4.53833
[0m
2020-07-19 21:56:22 Uploading - Uploading generated training model
2020-07-19 21:56:22 Completed - Training job completed
Training seconds: 46
Billable seconds: 46
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need; try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
    # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored; in
    # addition, we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and confirm that it is progressing) we can ask SageMaker to wait for it to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
...........................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (32.5 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-west-1-002178010120/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-09-11 20:17:00 Starting - Launching requested ML instances......
2020-09-11 20:18:06 Starting - Preparing the instances for training......
2020-09-11 20:19:10 Downloading - Downloading input data
2020-09-11 20:19:10 Training - Downloading the training image...[34mArguments: train[0m
[34m[2020-09-11:20:19:29:INFO] Running standalone xgboost training.[0m
[34m[2020-09-11:20:19:29:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8474.43mb[0m
[34m[2020-09-11:20:19:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:19:29] S3DistributionType set as FullyReplicated[0m
[34m[20:19:30] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-09-11:20:19:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:19:30] S3DistributionType set as FullyReplicated[0m
[34m[20:19:30] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.9202#011validation-rmse:18.2153[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.2601#011validation-rmse:14.7372[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.2722#011validation-rmse:11.9922[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[3]#011train-rmse:10.9729#011validation-rmse:9.77459[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[4]#011train-rmse:9.03913#011validation-rmse:8.03612[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.58958#011validation-rmse:6.71947[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.38707#011validation-rmse:5.68509[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.40916#011validation-rmse:4.89422[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.643#011validation-rmse:4.29884[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:4.0599#011validation-rmse:3.88442[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.59246#011validation-rmse:3.58347[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.2686#011validation-rmse:3.41187[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.94417#011validation-rmse:3.28721[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.73047#011validation-rmse:3.216[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.55338#011validation-rmse:3.16941[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.40462#011validation-rmse:3.17964[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.2998#011validation-rmse:3.15351[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.21249#011validation-rmse:3.17482[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:2.06855#011validation-rmse:3.13618[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:2.01064#011validation-rmse:3.17999[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.95868#011validation-rmse:3.23042[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.89637#011validation-rmse:3.28951[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.86407#011validation-rmse:3.3486[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.81664#011validation-rmse:3.37921[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.77335#011validation-rmse:3.39714[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.7608#011validation-rmse:3.43526[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[26]#011train-rmse:1.74057#011validation-rmse:3.4712[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.71539#011validation-rmse:3.5301[0m
[34m[20:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.66715#011validation-rmse:3.56762[0m
[34mStopping. Best iteration:[0m
[34m[18]#011train-rmse:2.06855#011validation-rmse:3.13618
[0m
2020-09-11 20:19:59 Uploading - Uploading generated training model
2020-09-11 20:19:59 Completed - Training job completed
Training seconds: 64
Billable seconds: 64
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to make sure the job is progressing and know when it terminates, we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
.........................................................!
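###Markdown
The `wait_for_transform_job` call above returns the transform job description once the job has finished. As a quick sanity check (a minimal sketch assuming the standard `DescribeTransformJob` response fields), we can print the final status and the S3 location the results were written to.
###Code
# transform_desc is the job description returned by wait_for_transform_job
print(transform_desc['TransformJobStatus'])
print(transform_desc['TransformOutput']['S3OutputPath'])
###Output
_____no_output_____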
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (37.3 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-444100773610/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____
###Markdown
Predicting Boston Housing Prices Using XGBoost in SageMaker (Batch Transform)_Deep Learning Nanodegree Program | Deployment_---As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) General OutlineTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.37)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.7)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.1.0)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3)
Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (1.25.11)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0)
Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4)
[33mWARNING: You are using pip version 20.3; however, version 20.3.3 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
Step 0: Setting up the notebookWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
###Code
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
###Code
boston = load_boston()
###Output
_____no_output_____
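###Markdown
A quick look at what `load_boston` returned helps motivate the next step: the feature matrix, the feature names and the target (median home value) come back as separate arrays that we still need to package up and split. A minimal sketch:
###Code
# The returned Bunch object exposes the feature matrix, the feature names and the target vector
print(boston.data.shape)
print(boston.feature_names)
print(boston.target.shape)
###Output
_____no_output_____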
###Markdown
Step 2: Preparing and splitting the dataGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
###Code
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
###Output
_____no_output_____
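###Markdown
As a quick sanity check on the two splits above, we can print how many rows ended up in each of the three sets. This is a minimal sketch using the variables defined in the previous cell.
###Code
# Roughly 45% train, 22% validation and 33% test of the original 506 rows
print("train:", X_train.shape, "validation:", X_val.shape, "test:", X_test.shape)
###Output
_____no_output_____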
###Markdown
Step 3: Uploading the data files to S3When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. Save the data locallyFirst we need to create the test, train and validation csv files which we will then upload to S3.
###Code
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
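###Markdown
Since the built-in algorithms expect the target as the first column and no header row, it is worth double-checking that the files were written in that layout. Below is a minimal sketch that reads the first row of the saved training file back in.
###Code
# The first column should be the median value (target); the remaining 13 columns are the features
check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=1)
print(check.shape)
print(check.iloc[0, 0])
###Output
_____no_output_____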
###Markdown
Upload to S3Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
###Code
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
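###Markdown
The `upload_data` calls return the S3 URIs of the uploaded objects; these are the values that the training and batch transform jobs will be pointed at later on.
###Code
# Each location is a plain S3 URI string, e.g. s3://<default-bucket>/boston-xgboost-LL/train.csv
print(test_location)
print(val_location)
print(train_location)
###Output
_____no_output_____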
###Markdown
Step 4: Train and construct the XGBoost modelNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself. Set up the training jobFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
###Code
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
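###Markdown
Before handing this dictionary to SageMaker it can be helpful to look at the fully assembled request. A minimal sketch: serialize `training_params` to JSON so that any missing or misspelled keys are easy to spot.
###Code
import json
# Pretty-print the assembled training job request (the TrainingJobName itself is added in the next cell)
print(json.dumps(training_params, indent=2))
###Output
_____no_output_____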
###Markdown
Execute the training jobNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
###Code
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
print(training_job_name)
###Output
_____no_output_____
###Markdown
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
###Code
session.logs_for_job(training_job_name, wait=True)
###Output
2020-12-25 20:24:54 Starting - Launching requested ML instances......
2020-12-25 20:26:06 Starting - Preparing the instances for training......
2020-12-25 20:27:06 Downloading - Downloading input data...
2020-12-25 20:27:36 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-12-25:20:27:56:INFO] Running standalone xgboost training.[0m
[34m[2020-12-25:20:27:56:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8446.43mb[0m
[34m[2020-12-25:20:27:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:27:56] S3DistributionType set as FullyReplicated[0m
[34m[20:27:57] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-12-25:20:27:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[20:27:57] S3DistributionType set as FullyReplicated[0m
[34m[20:27:57] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[0]#011train-rmse:19.6838#011validation-rmse:19.6938[0m
[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.
[0m
[34mWill train until validation-rmse hasn't improved in 10 rounds.[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3[0m
[34m[1]#011train-rmse:16.0449#011validation-rmse:16.276[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4[0m
[34m[2]#011train-rmse:13.1728#011validation-rmse:13.6727[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[3]#011train-rmse:10.8237#011validation-rmse:11.5327[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[4]#011train-rmse:8.90937#011validation-rmse:9.93265[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-rmse:7.40679#011validation-rmse:8.68047[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-rmse:6.18892#011validation-rmse:7.67515[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-rmse:5.23222#011validation-rmse:6.96975[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[8]#011train-rmse:4.52048#011validation-rmse:6.46143[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[9]#011train-rmse:3.93401#011validation-rmse:6.06267[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-rmse:3.46515#011validation-rmse:5.73353[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-rmse:3.06799#011validation-rmse:5.49811[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-rmse:2.79937#011validation-rmse:5.32683[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[13]#011train-rmse:2.58865#011validation-rmse:5.18217[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[14]#011train-rmse:2.44436#011validation-rmse:5.07863[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-rmse:2.30881#011validation-rmse:4.99296[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[16]#011train-rmse:2.15163#011validation-rmse:4.85158[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[17]#011train-rmse:2.06106#011validation-rmse:4.80259[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[18]#011train-rmse:1.97267#011validation-rmse:4.77777[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[19]#011train-rmse:1.89669#011validation-rmse:4.73542[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-rmse:1.83086#011validation-rmse:4.66811[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21]#011train-rmse:1.77446#011validation-rmse:4.6577[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[22]#011train-rmse:1.7519#011validation-rmse:4.6556[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-rmse:1.71544#011validation-rmse:4.63806[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[24]#011train-rmse:1.69065#011validation-rmse:4.63754[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-rmse:1.61093#011validation-rmse:4.56899[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[26]#011train-rmse:1.59047#011validation-rmse:4.55545[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[27]#011train-rmse:1.51519#011validation-rmse:4.50797[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[28]#011train-rmse:1.48725#011validation-rmse:4.49324[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-rmse:1.44659#011validation-rmse:4.4695[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[30]#011train-rmse:1.41757#011validation-rmse:4.46789[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[31]#011train-rmse:1.3801#011validation-rmse:4.44741[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[32]#011train-rmse:1.34695#011validation-rmse:4.43577[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-rmse:1.32131#011validation-rmse:4.44525[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[34]#011train-rmse:1.30369#011validation-rmse:4.41602[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-rmse:1.28286#011validation-rmse:4.40856[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-rmse:1.25579#011validation-rmse:4.41625[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4[0m
[34m[37]#011train-rmse:1.25347#011validation-rmse:4.42223[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[38]#011train-rmse:1.22902#011validation-rmse:4.44247[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-rmse:1.19195#011validation-rmse:4.41454[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[40]#011train-rmse:1.17548#011validation-rmse:4.41808[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 4 pruned nodes, max_depth=4[0m
[34m[41]#011train-rmse:1.16082#011validation-rmse:4.39869[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[42]#011train-rmse:1.15541#011validation-rmse:4.40806[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[43]#011train-rmse:1.13267#011validation-rmse:4.39782[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 14 pruned nodes, max_depth=1[0m
[34m[44]#011train-rmse:1.12865#011validation-rmse:4.39664[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=3[0m
[34m[45]#011train-rmse:1.10817#011validation-rmse:4.3879[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 8 pruned nodes, max_depth=3[0m
[34m[46]#011train-rmse:1.09963#011validation-rmse:4.38376[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 8 pruned nodes, max_depth=2[0m
[34m[47]#011train-rmse:1.08521#011validation-rmse:4.35795[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[48]#011train-rmse:1.07524#011validation-rmse:4.35804[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[49]#011train-rmse:1.06213#011validation-rmse:4.34826[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[50]#011train-rmse:1.05476#011validation-rmse:4.33446[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[51]#011train-rmse:1.04316#011validation-rmse:4.34995[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[52]#011train-rmse:1.01916#011validation-rmse:4.34363[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 12 pruned nodes, max_depth=1[0m
[34m[53]#011train-rmse:1.01599#011validation-rmse:4.33211[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[54]#011train-rmse:1.01221#011validation-rmse:4.32679[0m
[34m[55]#011train-rmse:0.99631#011validation-rmse:4.32346[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[56]#011train-rmse:0.994212#011validation-rmse:4.3248[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[57]#011train-rmse:0.989076#011validation-rmse:4.32656[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[58]#011train-rmse:0.976712#011validation-rmse:4.32343[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[59]#011train-rmse:0.97671#011validation-rmse:4.32308[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[60]#011train-rmse:0.976714#011validation-rmse:4.32354[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[61]#011train-rmse:0.966755#011validation-rmse:4.32494[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 10 pruned nodes, max_depth=2[0m
[34m[62]#011train-rmse:0.960617#011validation-rmse:4.32266[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 12 pruned nodes, max_depth=3[0m
[34m[63]#011train-rmse:0.947992#011validation-rmse:4.32936[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 14 pruned nodes, max_depth=4[0m
[34m[64]#011train-rmse:0.940161#011validation-rmse:4.31711[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 10 pruned nodes, max_depth=4[0m
[34m[65]#011train-rmse:0.926912#011validation-rmse:4.31693[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[66]#011train-rmse:0.909815#011validation-rmse:4.32625[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[67]#011train-rmse:0.909833#011validation-rmse:4.3259[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2[0m
[34m[68]#011train-rmse:0.902328#011validation-rmse:4.31678[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 18 pruned nodes, max_depth=3[0m
[34m[69]#011train-rmse:0.898375#011validation-rmse:4.31658[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[70]#011train-rmse:0.898377#011validation-rmse:4.31656[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[71]#011train-rmse:0.898661#011validation-rmse:4.31466[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[72]#011train-rmse:0.892166#011validation-rmse:4.32099[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1[0m
[34m[73]#011train-rmse:0.891775#011validation-rmse:4.32175[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, max_depth=1[0m
[34m[74]#011train-rmse:0.890513#011validation-rmse:4.3195[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[75]#011train-rmse:0.890605#011validation-rmse:4.31906[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=3[0m
[34m[76]#011train-rmse:0.875779#011validation-rmse:4.30263[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[77]#011train-rmse:0.849814#011validation-rmse:4.3018[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[78]#011train-rmse:0.836957#011validation-rmse:4.29829[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2[0m
[34m[79]#011train-rmse:0.828574#011validation-rmse:4.29612[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[80]#011train-rmse:0.828605#011validation-rmse:4.29577[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[81]#011train-rmse:0.828521#011validation-rmse:4.29722[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 24 pruned nodes, max_depth=2[0m
[34m[82]#011train-rmse:0.825051#011validation-rmse:4.29514[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[83]#011train-rmse:0.825053#011validation-rmse:4.29523[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[84]#011train-rmse:0.825083#011validation-rmse:4.29594[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[85]#011train-rmse:0.825155#011validation-rmse:4.29679[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[86]#011train-rmse:0.825175#011validation-rmse:4.29697[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[34m[87]#011train-rmse:0.825154#011validation-rmse:4.29678[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[88]#011train-rmse:0.82515#011validation-rmse:4.29675[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[89]#011train-rmse:0.821079#011validation-rmse:4.29573[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 16 pruned nodes, max_depth=0[0m
[34m[90]#011train-rmse:0.82107#011validation-rmse:4.29564[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 18 pruned nodes, max_depth=0[0m
[34m[91]#011train-rmse:0.821028#011validation-rmse:4.29509[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[92]#011train-rmse:0.820996#011validation-rmse:4.29426[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[93]#011train-rmse:0.806356#011validation-rmse:4.29676[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 10 pruned nodes, max_depth=0[0m
[34m[94]#011train-rmse:0.806358#011validation-rmse:4.29682[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[95]#011train-rmse:0.806366#011validation-rmse:4.29702[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[96]#011train-rmse:0.806352#011validation-rmse:4.29663[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[97]#011train-rmse:0.806419#011validation-rmse:4.29779[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[98]#011train-rmse:0.797875#011validation-rmse:4.30083[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[99]#011train-rmse:0.797833#011validation-rmse:4.30047[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0[0m
[34m[100]#011train-rmse:0.797821#011validation-rmse:4.30035[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[101]#011train-rmse:0.797764#011validation-rmse:4.29964[0m
[34m[20:27:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 18 pruned nodes, max_depth=2[0m
[34m[102]#011train-rmse:0.793491#011validation-rmse:4.30222[0m
[34mStopping. Best iteration:[0m
[34m[92]#011train-rmse:0.820996#011validation-rmse:4.29426
[0m
###Markdown
Build the modelNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
###Code
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
###Output
_____no_output_____
###Markdown
Step 5: Testing the modelNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier. Set up the batch transform jobJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
###Code
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time; however, in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
###Output
_____no_output_____
###Markdown
Execute the batch transform jobNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to make sure the job is progressing and know when it terminates, we can ask SageMaker to wait for the transform job to complete.
###Code
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
###Output
..........................................................!
###Markdown
Analyze the resultsNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
###Code
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
###Output
Completed 2.3 KiB/2.3 KiB (30.6 KiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-542531091761/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out
###Markdown
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
###Code
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
###Output
_____no_output_____
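###Markdown
Beyond the scatter plot, a quick look at the residuals gives a feel for how large the typical error is. This is a minimal sketch based on the `Y_test` and `Y_pred` objects loaded above.
###Code
# Residuals (true minus predicted median price) and a couple of summary statistics
residuals = Y_test.values.flatten() - Y_pred.values.flatten()
print("Mean absolute error: {:.3f}".format(np.abs(residuals).mean()))
print("Largest absolute error: {:.3f}".format(np.abs(residuals).max()))
###Output
_____no_output_____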
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
###Output
_____no_output_____ |
examples/notebooks/EODC_Forum_2019/VITO.ipynb | ###Markdown
OpenEO Connection to VITO Backend
###Code
import openeo
import logging
from openeo.auth.auth_bearer import BearerAuth
logging.basicConfig(level=logging.INFO)
# Define constants
# Connection
VITO_DRIVER_URL = "http://openeo.vgt.vito.be/openeo/0.4.0"
OUTPUT_FILE = "/tmp/openeo_vito_output.tiff"
OUTFORMAT = "tiff"
# Data
PRODUCT_ID = "BIOPAR_FAPAR_V1_GLOBAL"
DATE_START = "2016-01-01T00:00:00Z"
DATE_END = "2016-03-10T23:59:59Z"
IMAGE_WEST = 16.138916
IMAGE_EAST = 16.524124
IMAGE_NORTH = 48.320647
IMAGE_SOUTH = 48.138600
IMAGE_SRS = "EPSG:4326"
# Processes
NDVI_RED = "B4"
NDVI_NIR = "B8A"
STRECH_COLORS_MIN = -1
STRECH_COLORS_MAX = 1
# Connect with VITO backend
connection = openeo.connect(VITO_DRIVER_URL)
connection
# Get available processes from the backend.
processes = connection.list_processes()
processes
# Retrieve the list of available collections
collections = connection.list_collections()
list(collections)[:2]
# Get detailed information about a collection
process = connection.describe_collection(PRODUCT_ID)
process
# Select collection product
datacube = connection.imagecollection(PRODUCT_ID)
print(datacube.to_json())
# Specifying the date range and the bounding box
datacube = datacube.filter_bbox(west=IMAGE_WEST, east=IMAGE_EAST, north=IMAGE_NORTH,
south=IMAGE_SOUTH, crs=IMAGE_SRS)
datacube = datacube.filter_daterange(extent=[DATE_START, DATE_END])
print(datacube.to_json())
# Applying some operations on the data
datacube = datacube.ndvi(red=NDVI_RED, nir=NDVI_NIR)
datacube = datacube.min_time()
print(datacube.to_json())
# Sending the job to the backend
job = datacube.create_job()
job.start_job()
job
# Describe Job
job.describe_job()
# Download job result
job.download_results(OUTPUT_FILE)
job
# Showing the result
from IPython.display import Image
result = Image(filename=OUTPUT_FILE)
result
#from PIL import Image
#resp2 = req.get(OUTPUT_FILE)
#resp2.raw.decode_content = True
#im = Image.open(resp2.raw)
#im
###Output
_____no_output_____
###Markdown
OpenEO Connection to VITO Backend
###Code
import openeo
import logging
from openeo.auth.auth_bearer import BearerAuth
logging.basicConfig(level=logging.INFO)
# Define constants
# Connection
VITO_DRIVER_URL = "http://openeo.vgt.vito.be/openeo/0.4.0"
OUTPUT_FILE = "/tmp/openeo_vito_output.tiff"
OUTFORMAT = "tiff"
# Data
PRODUCT_ID = "BIOPAR_FAPAR_V1_GLOBAL"
DATE_START = "2016-01-01T00:00:00Z"
DATE_END = "2016-03-10T23:59:59Z"
IMAGE_WEST = 16.138916
IMAGE_EAST = 16.524124
IMAGE_NORTH = 48.320647
IMAGE_SOUTH = 48.138600
IMAGE_SRS = "EPSG:4326"
# Processes
NDVI_RED = "B4"
NDVI_NIR = "B8A"
STRECH_COLORS_MIN = -1
STRECH_COLORS_MAX = 1
# Connect with VITO backend
connection = openeo.connect(VITO_DRIVER_URL)
connection
# Get available processes from the backend.
processes = connection.list_processes()
processes
# Retrieve the list of available collections
collections = connection.list_collections()
list(collections)[:2]
# Get detailed information about a collection
process = connection.describe_collection(PRODUCT_ID)
process
# Select collection product
datacube = connection.imagecollection(PRODUCT_ID)
print(datacube.to_json())
# Specifying the date range and the bounding box
datacube = datacube.filter_bbox(west=IMAGE_WEST, east=IMAGE_EAST, north=IMAGE_NORTH,
south=IMAGE_SOUTH, crs=IMAGE_SRS)
datacube = datacube.filter_daterange(extent=[DATE_START, DATE_END])
print(datacube.to_json())
# Applying some operations on the data
datacube = datacube.ndvi(red=NDVI_RED, nir=NDVI_NIR)
datacube = datacube.min_time()
print(datacube.to_json())
# Sending the job to the backend
job = datacube.send_job()
job.start_job()
job
# Describe Job
job.describe_job()
# Download job result
job.download_results(OUTPUT_FILE)
job
# Showing the result
from IPython.display import Image
result = Image(filename=OUTPUT_FILE)
result
#from PIL import Image
#resp2 = req.get(OUTPUT_FILE)
#resp2.raw.decode_content = True
#im = Image.open(resp2.raw)
#im
###Output
_____no_output_____ |
Neural network using numpy.ipynb | ###Markdown
Testing on diabetes dataset
###Code
import numpy as np
import pandas as pd
# Load the diabetes data: all columns except the last are features, the last column is the label
data = pd.read_csv('diabetes.csv')
X = np.array(data.iloc[:, :-1])
# Map the string labels to 0/1 and reshape into a column vector
y = np.array(data.iloc[:, -1])
y[y == 'positive'] = 1.
y[y == 'negative'] = 0.
y = np.array(y, dtype=np.float64).reshape(len(y), 1)
print(X.shape)
print(y.shape)
# Network architecture: input layer, two hidden layers of 10 units each, and the output layer
layer_dims = [X.shape[1], 10, 10, y.shape[1]]
parameters, grads = NN_model(X, y, 1000, layer_dims, learning_rate=0.001)
###Output
_____no_output_____ |
docs/T697871_Black_box_Attack_API.ipynb | ###Markdown
Black-box Attack API
###Code
!git clone https://github.com/Yueeeeeeee/RecSys-Extraction-Attack.git
%cd RecSys-Extraction-Attack/
!apt-get install libarchive-dev
!pip install faiss-cpu --no-cache
!apt-get install libomp-dev
!pip install wget
!pip install libarchive
import collections.abc
import torch

# Helper to reset gradients on a tensor or an iterable of tensors
def zero_gradients(x):
if isinstance(x, torch.Tensor):
if x.grad is not None:
x.grad.detach_()
x.grad.zero_()
elif isinstance(x, collections.abc.Iterable):
for elem in x:
zero_gradients(elem)
###Output
_____no_output_____
###Markdown
Black-Box Model Training
**NARM model trained on ML-1M dataset.** Given a user sequence $x$ with length $T$, we use $x_{[:T-2]}$ as training data and use the last two items for validation and testing respectively. We use hyper-parameters from grid search. Additionally, all models are trained using the Adam optimizer with weight decay 0.01, learning rate 0.001, batch size 128, 100 linear warmup steps and an allowed sequence length of 200. We accelerate evaluation by uniformly sampling 100 negative items for each user, then rank them with the positive item and report the average performance on these 101 testing items. Our evaluation focuses on two aspects:
- Ranking Performance: We use truncated Recall@K, which is equivalent to Hit Rate (HR@K) in our evaluation, and Normalized Discounted Cumulative Gain (NDCG@K) to measure ranking quality.
- Agreement Measure: We define Agreement@K (Agr@K) to evaluate the output similarity between the black-box model and our extracted white-box model.

Official results:
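As a quick reference for these metrics, here is a minimal NumPy sketch; the helper names are illustrative rather than the repository's own implementation, and the Agreement@K overlap shown is one common reading of the definition:

```python
import numpy as np

def hr_at_k(ranked_items, positive_item, k):
    # Truncated Recall@K with a single positive item == Hit Rate:
    # 1 if the positive item appears in the top-k, else 0.
    return float(positive_item in ranked_items[:k])

def ndcg_at_k(ranked_items, positive_item, k):
    # With one relevant item, DCG reduces to 1/log2(rank + 2) (0-based rank) and IDCG is 1.
    if positive_item in ranked_items[:k]:
        rank = ranked_items[:k].index(positive_item)
        return 1.0 / np.log2(rank + 2)
    return 0.0

def agr_at_k(black_box_topk, white_box_topk, k):
    # One common reading of Agreement@K: overlap of the two models' top-k lists.
    return len(set(black_box_topk[:k]) & set(white_box_topk[:k])) / k

# 101 candidates per user: 1 positive + 100 sampled negatives, ranked by predicted score
ranked = [42, 7, 13, 99, 5]
print(hr_at_k(ranked, 13, 10), ndcg_at_k(ranked, 13, 10), agr_at_k(ranked, [42, 13, 8], 3))
```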
###Code
!python train.py
# !zip -r bb_model_narm_ml1m.zip ./experiments
# !cp bb_model_narm_ml1m.zip /content/drive/MyDrive/TempData
# !ls /content/drive/MyDrive/TempData
###Output
_____no_output_____
###Markdown
White-Box Model Distillation
###Code
!cp /content/drive/MyDrive/TempData/bb_model_narm_ml1m.zip .
!unzip bb_model_narm_ml1m.zip
!python distill.py
!zip -r wb_model_narm_ml1m.zip ./experiments
!cp wb_model_narm_ml1m.zip /content/drive/MyDrive/TempData
!ls /content/drive/MyDrive/TempData
###Output
_____no_output_____
###Markdown
Attack
###Code
!python attack.py
!zip -r wb_model_narm_ml1m.zip ./experiments
!cp wb_model_narm_ml1m.zip /content/drive/MyDrive/TempData
!ls /content/drive/MyDrive/TempData
###Output
_____no_output_____
###Markdown
Retrain
###Code
!python retrain.py
###Output
Input 1 / 20 for movielens, b for beauty, bd for dense beauty, g for games, s for steam and y for yoochoose: 1
Input model code, b for BERT, s for SASRec and n for NARM: n
Input GPU ID: 0
Already preprocessed. Skip preprocessing
Negatives samples exist. Loading.
Negatives samples exist. Loading.
Input white box model code, b for BERT, s for SASRec and n for NARM: n
{1: 'narm2narm_autoregressive4', 2: 'narm_black_box'}
Input index of desired white box model: 1
Already preprocessed. Skip preprocessing
## Generate Biased Data with Target [2459, 1009, 2135, 918, 3233, 1226, 498, 2917, 1332, 3184, 264, 2490, 1696, 1448, 144, 365, 1368, 2714, 1874, 3285, 2235, 3406, 3155, 1322, 2928] ##
Generating poisoned dataset...
100% 2/2 [00:04<00:00, 2.24s/it]
Already preprocessed. Skip preprocessing
Negative samples don't exist. Generating.
Sampling negative items randomly...
100% 6100/6100 [00:01<00:00, 6058.91it/s]
Negative samples don't exist. Generating.
Sampling negative items randomly...
100% 6100/6100 [00:01<00:00, 5976.19it/s]
## Biased Retrain on Item [2459, 1009, 2135, 918, 3233, 1226, 498, 2917, 1332, 3184, 264, 2490, 1696, 1448, 144, 365, 1368, 2714, 1874, 3285, 2235, 3406, 3155, 1322, 2928] ##
Epoch 1, loss 5.217 : 100% 7761/7761 [11:15<00:00, 11.49it/s]
Eval: N@1 0.474, N@5 0.629, N@10 0.657, R@1 0.474, R@5 0.760, R@10 0.847: 100% 48/48 [00:01<00:00, 29.43it/s]
Update Best NDCG@10 Model at 1
Epoch 2, loss 5.196 : 100% 7761/7761 [11:15<00:00, 11.48it/s]
Eval: N@1 0.477, N@5 0.632, N@10 0.658, R@1 0.477, R@5 0.764, R@10 0.845: 100% 48/48 [00:01<00:00, 29.73it/s]
Update Best NDCG@10 Model at 2
Epoch 3, loss 5.186 : 100% 7761/7761 [11:15<00:00, 11.48it/s]
Eval: N@1 0.476, N@5 0.634, N@10 0.661, R@1 0.476, R@5 0.766, R@10 0.847: 100% 48/48 [00:01<00:00, 29.53it/s]
Update Best NDCG@10 Model at 3
Epoch 4, loss 5.178 : 100% 7761/7761 [11:16<00:00, 11.47it/s]
Eval: N@1 0.474, N@5 0.632, N@10 0.659, R@1 0.474, R@5 0.763, R@10 0.846: 100% 48/48 [00:01<00:00, 29.71it/s]
Epoch 5, loss 5.171 : 100% 7761/7761 [11:16<00:00, 11.47it/s]
Eval: N@1 0.468, N@5 0.629, N@10 0.655, R@1 0.468, R@5 0.764, R@10 0.845: 100% 48/48 [00:01<00:00, 29.76it/s]
Epoch 6, loss 5.165 : 100% 7761/7761 [11:17<00:00, 11.45it/s]
Eval: N@1 0.475, N@5 0.635, N@10 0.660, R@1 0.475, R@5 0.769, R@10 0.848: 100% 48/48 [00:01<00:00, 29.76it/s]
Epoch 7, loss 5.139 : 37% 2903/7761 [04:13<07:03, 11.46it/s] |
src/JSON-to-CSV.ipynb | ###Markdown
Import all necessary libraries
###Code
import os
import re
from pyarrow import json
import pyarrow.parquet as pq
###Output
_____no_output_____
###Markdown
Define our global variables. `TEMPORAL_DIR` is the temporary landing zone where raw files that need to be processed will be placed. `PERSISTENT_DIR` will be the location of files converted to the selected file format.
###Code
TEMPORAL_DIR = '../data/raw'
PERSISTENT_DIR = '../data/processed'
###Output
_____no_output_____
###Markdown
Here we create a simple function that will convert a JSON file into a parquet file, and place the converted file into the appropriate location
###Code
def convert_json_to_parquet(input_filename, input_dir, output_filename, output_dir):
'''
This function will take an input file in the form of JSON from a given directory,
convert the file to a parquet, and place the file in a directory specified in parameters.
:param input_filename: filename (including extension) that will be converted into parquet file
:param input_dir: directory where the JSON file exists
:param output_dir: directory where the parquet file should be placed after conversion
:param output_filename: filename that will be given to converted parquet file
:return: None
'''
table = json.read_json(f'{input_dir}/{input_filename}')
pq.write_table(table, f'{output_dir}/{output_filename}')
###Output
_____no_output_____
###Markdown
First we can strip primary metadata information from the filename as received from the website.
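For reference, a quick sketch of how the `re.split` call below carves a filename into the indices used in the loop; the example filename is hypothetical, since the real naming convention from the website is not shown here:

```python
import re

# Hypothetical filename following a YYYY-MM-DD-userID-taskID.extension pattern
example = "2021-05-28-user123-task42.json"
metadata = re.split('[-.]', example)
print(metadata)
# ['2021', '05', '28', 'user123', 'task42', 'json']
# metadata[0]/metadata[1] -> YYYY/MM sub-directory,
# metadata[3]-metadata[4] -> new file name, metadata[5] -> extension check
```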
###Code
for filename in os.listdir(TEMPORAL_DIR): # iterate over all files in directory DIR
if not filename.startswith('.'): # do not process hidden files that start with "."
metadata = re.split('[-.]',filename) # splits the filename on '-' and '.' -> creates a list
file_directory = f"{PERSISTENT_DIR}/{metadata[0]}/{metadata[1]}" # uses YYYY/MM as the name of the sub-directory
new_filename = f"{metadata[3]}-{metadata[4]}" # new file name will be userID-taskID
if not os.path.exists(file_directory): # creates the directory if it doesn't exist
os.makedirs(file_directory)
if metadata[5] == "json":
convert_json_to_parquet(filename, TEMPORAL_DIR, new_filename, file_directory)
elif metadata[5] == "csv":
print("This is where Vlada's function will be placed") # TODO: Replace with Vlada's function to convert from CSV to parquet
###Output
_____no_output_____ |
Jupyter Notebook/Jupyter Notebok/Mutation - Statement/math.ipynb | ###Markdown
Difference - Classes not covered in jacoco or PIT
###Code
df = merged_inner
df.columns
merged_inner.head()
merged_inner.count()
df.plot(x='Mutation_Score', y='Statement_Percentage', style='o')
df[['Mutation_Score','Statement_Percentage']].corr(method ='spearman')
df.plot(x='Mutation_Score', y='Branch_Percentage', style='o')
df[['Mutation_Score','Branch_Percentage']].corr(method ='spearman')
df.to_csv('math-mu-st-branch.csv')
from google.colab import files
files.download("math-mu-st-branch.csv")
###Output
_____no_output_____ |
CRE_Marketing_Data/HotelTaxPayerData.ipynb | ###Markdown
Grabbing public Hotel Occupancy Tax data, then storing it into a database, cross-referencing so that repeated data is not stored twice.

Prerequisites: requirements for MySQL-Python communication:
* pip install mysqlclient
* pip install mysql-connector-python
* if receiving a wheel error: pip install wheel
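The imports cell below pulls MySQL credentials from a local `db_setup.py` module that is not part of this notebook; a minimal sketch of what such a file would contain (the values are placeholders, not real credentials):

```python
# db_setup.py (sketch) -- local credentials module imported by the cell below
mysql_user = "your_mysql_username"
mysql_password = "your_mysql_password"
db_name = "your_database_name"
```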
###Code
# Imports
import time
import sys
from zipfile import ZipFile
import pandas as pd
import pandas.io.sql as pdsql
import glob, os
import numpy as np
# Datetime for new column
import datetime
# Imports for mySQL
from sqlalchemy import create_engine, event, DateTime
from db_setup import mysql_user, mysql_password, db_name
import mysql.connector
###Output
_____no_output_____
###Markdown
File path defined
###Code
mydir = os.path.abspath('./HotelOccupancyTaxData')
mydir
###Output
_____no_output_____
###Markdown
Defining headers for data
###Code
# Defining header for marketing data. Marketing data comes with no header
# Franchise tax permit
ftact_date_head = ['Taxpayer_Number',
'Taxpayer_Name',
'Taxpayer_Address',
'Taxpayer_City',
'Taxpayer_State',
'Taxpayer_Zip_Code',
'Taxpayer_County_Code',
'Taxpayer_Organizational_Type',
'Taxpayer_Phone_Number',
'Record_Type_Code',
'Responsibility_Beginning_Date',
'Secretary_of_State_File_Number',
'SOS_Charter_Date',
'SOS_Status_Date',
'Current_Exempt_Reason_Code',
'Agent_Name',
'Agent_Address',
'Agent_City',
'Agent_State',
'Agent_Zip_Code']
# Franchise tax permit date
ftact_head = ['Taxpayer_Number',
'Taxpayer_Name',
'Taxpayer_Address',
'Taxpayer_City',
'Taxpayer_State',
'Taxpayer_Zip_Code',
'Taxpayer_County_Code',
'Taxpayer_Organizational_Type',
'Taxpayer_Phone_Number',
'Record_Type_Code',
'Responsibility_Beginning_Date',
'Responsibility_End_Date',
'Responsibility_End_Reason_Code',
'Secretary_of_State_File_Number',
'SOS_Charter_Date',
'SOS_Status_Date',
'SOS_Status_Code',
'Rigth_to_Tansact_Business_Code',
'Current_Exempt_Reason_Code',
'Exempt_Begin_Date',
'NAICS_Code']
###Output
_____no_output_____
###Markdown
Extract files from zipped folder
###Code
# extract all files
i = 0
for file in glob.glob(mydir + '/*.zip'):
i += 1
zip = ZipFile(file, 'r')
print(f'Extracting file {i}')
zip.extractall(mydir)
zip.close()
print('Done!')
print(f"File {i}, extracted: {file}\n")
time.sleep(1)
os.remove(file)
###Output
Extracting file 1
Done!
File 1, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\FTACT.zip
Extracting file 2
Done!
File 2, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\PP_files.zip
Extracting file 3
Done!
File 3, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\Real_building_land.zip
Extracting file 4
Done!
File 4, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\STACT.zip
###Markdown
Add CSV files to a data frame (fran and stp)
###Code
# Searches for a csv file
df_fran = pd.DataFrame()
for file in glob.glob(mydir + '/*.csv'):
if 'fran' in file:
df = pd.read_csv(file, header=None, index_col=False, names=ftact_date_head, engine ='python')
df_fran = df_fran.append(df)
os.remove(file)
print('Added the ' + file + " into the DF df_fran")
print("deleted the file " + str(file))
else:
print('we do not know what to do with this file: ' + str(file))
###Output
_____no_output_____
###Markdown
FRAN DF created
###Code
df_fran.head()
###Output
_____no_output_____
###Markdown
Adding the Taxpayer Organizational Name and Record Type Name columns
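The cell below spells the mapping out with one `.loc` assignment per code; an equivalent, more compact alternative (same codes and labels as in that cell) would be a lookup dictionary combined with `Series.map`, sketched here for reference:

```python
# Equivalent mapping with lookup dicts + Series.map (same values as the .loc version below)
org_type_names = {
    'CF': 'Foreign Profit', 'CI': 'Limited Liability Company - Foreign',
    'CL': 'Limited Liability Company - Texas', 'CM': 'Foreign Non-Profit',
    'CN': 'Texas Non-Profit', 'CP': 'Professional', 'CR': 'Texas Insurance',
    'CS': 'Foreign Insurance - OOS', 'CT': 'Texas Profit',
    'CW': 'Texas Railroad Corporation', 'CX': 'Foreign Railroad Corporation - OOS',
}
record_type_names = {
    'U': 'Secretary of State (SOS) File Number',
    'V': 'SOS Certificate of Authority (COA) File Number',
    'X': 'Comptroller Assigned File Number',
}
df_fran['Taxpayer_Organizational_Name'] = df_fran['Taxpayer_Organizational_Type'].map(org_type_names)
df_fran['Record_Type_Name'] = df_fran['Record_Type_Code'].map(record_type_names)
```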
###Code
# Taxpayer Organization Type:
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CF'),'Taxpayer_Organizational_Name']='Foreign Profit' # CF - Foreign Profit
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CI'),'Taxpayer_Organizational_Name']='Limited Liability Company - Foreign'# CI - Limited Liability Company - Foreign
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CL'),'Taxpayer_Organizational_Name']='Limited Liability Company - Texas' # CL - Limited Liability Company - Texas
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CM'),'Taxpayer_Organizational_Name']='Foreign Non-Profit' # CM - Foreign Non-Profit
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CN'),'Taxpayer_Organizational_Name']='Texas Non-Profit' # CN - Texas Non-Profit
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CP'),'Taxpayer_Organizational_Name']='Professional' # CP - Professional
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CR'),'Taxpayer_Organizational_Name']='Texas Insurance' # CR - Texas Insurance
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CS'),'Taxpayer_Organizational_Name']='Foreign Insurance - OOS' # CS - Foreign Insurance - OOS
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CT'),'Taxpayer_Organizational_Name']='Texas Profit' # CT - Texas Profit
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CW'),'Taxpayer_Organizational_Name']='Texas Railroad Corporation' # CW - Texas Railroad Corporation
df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CX'),'Taxpayer_Organizational_Name']='Foreign Railroad Corporation - OOS' # CX - Foreign Railroad Corporation - OOS
# Record Type Code:
df_fran.loc[(df_fran.Record_Type_Code == 'U'),'Record_Type_Name']='Secretary of State (SOS) File Number' # U = Secretary of State (SOS) File Number
df_fran.loc[(df_fran.Record_Type_Code == 'V'),'Record_Type_Name']='SOS Certificate of Authority (COA) File Number' # V = SOS Certificate of Authority (COA) File Number
df_fran.loc[(df_fran.Record_Type_Code == 'X'),'Record_Type_Name']='Comptroller Assigned File Number' # X = Comptroller Assigned File Number
df_fran.head()
###Output
_____no_output_____
###Markdown
Date format
###Code
# df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].str.strip()
df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].fillna(0)
df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].fillna(0)
# df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].astype(np.int64)
# df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].astype(np.int64)
df_fran['Responsibility_Beginning_Date'] = df_fran['Responsibility_Beginning_Date'].astype(np.int64)
df_fran['SOS_Charter_Date'] = pd.to_datetime(df_fran["SOS_Charter_Date"], format='%Y%m%d', errors='coerce')
df_fran['SOS_Status_Date'] = pd.to_datetime(df_fran["SOS_Status_Date"], format='%Y%m%d', errors='coerce')
df_fran['Responsibility_Beginning_Date'] = pd.to_datetime(df_fran["Responsibility_Beginning_Date"], format='%Y%m%d', errors='coerce')
df_fran['SOS_Charter_Date'] =df_fran['SOS_Charter_Date'].dt.normalize()
df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].dt.normalize()
df_fran['Responsibility_Beginning_Date'] = df_fran['Responsibility_Beginning_Date'].dt.normalize()
df_fran = df_fran[df_fran['Taxpayer_Zip_Code']!=0]
df_fran.head()
###Output
_____no_output_____
###Markdown
Checking column count
###Code
df_fran.count()
###Output
_____no_output_____
###Markdown
Extracting textfile and storing into DF (FTOFFDIR, FTACT, STACT)
###Code
# Searches for FTACT text files and accumulates them into a single DataFrame
df_ftact = pd.DataFrame()
for file in glob.glob(mydir + '/*.txt'):
    if 'FTACT' in file:
        df = pd.read_fwf(file,
                         widths=[11, 50, 40, 20, 2, 5, 3, 2, 10, 1, 8, 8, 2, 10, 8, 8, 2, 1, 3, 8, 6],
                         header=None,
                         names=ftact_head, index_col=False, engine='python')  # FTOOB, FTACT
        df_ftact = df_ftact.append(df)
        os.remove(file)
        print('Added the ' + file + ' into df_ftact')
        print('deleted the file ' + str(file))
    else:
        os.remove(file)
        print('File not being used: ' + str(file))
###Output
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\building_other.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\building_res.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\exterior.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features_detail1.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features_detail2.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\fixtures.txt
Added the C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\FTACT.txt into df_ftact
deleted the file C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\FTACT.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\land.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\land_ag.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\STACT Layout.txt
Added the C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\STACT.txt into df_stact
deleted the file C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\STACT.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\structural_elem1.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\structural_elem2.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_business_acct.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_business_detail.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_jur_exempt.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_jur_tax_dist_exempt_value.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_jur_tax_dist_percent_rate.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_jur_value.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_pp_c.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_pp_e.txt
File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\t_pp_l.txt
###Markdown
FTACT DF created
###Code
df_ftact.head()
###Output
_____no_output_____
###Markdown
Taxpayer_Organizational_Name and Record_Type_Name Column
###Code
# Taxpayer Organization Type:
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AB'),'Taxpayer_Organizational_Name']='Texas Business Association' # AB – Texas Business Association
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AC'),'Taxpayer_Organizational_Name']='Foreign Business Association' # AC – Foreign Business Association
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AF'),'Taxpayer_Organizational_Name']='Foreign Professional Association' # AF – Foreign Professional Association
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AP'),'Taxpayer_Organizational_Name']='Texas Professional Association' # AP – Texas Professional Association
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AR'),'Taxpayer_Organizational_Name']='Other Association' # AR – Other Association
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CF'),'Taxpayer_Organizational_Name']='Foreign Profit' # CF - Foreign Profit
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CI'),'Taxpayer_Organizational_Name']='Limited Liability Company - Foreign' # CI - Limited Liability Company - Foreign
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CL'),'Taxpayer_Organizational_Name']='Limited Liability Company - Texas' # CL - Limited Liability Company - Texas
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CM'),'Taxpayer_Organizational_Name']='Foreign Non-Profit' # CM - Foreign Non-Profit
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CN'),'Taxpayer_Organizational_Name']='Texas Non-Profit' # CN - Texas Non-Profit
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CP'),'Taxpayer_Organizational_Name']='Professional' # CP - Professional
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CR'),'Taxpayer_Organizational_Name']='Texas Insurance' # CR - Texas Insurance
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CS'),'Taxpayer_Organizational_Name']='Foreign Insurance - OOS' # CS - Foreign Insurance - OOS
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CT'),'Taxpayer_Organizational_Name']='Texas Profit' # CT - Texas Profit
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CU'),'Taxpayer_Organizational_Name']='Foreign Professional Corporation' # CU – Foreign Professional Corporation
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CW'),'Taxpayer_Organizational_Name']='Texas Railroad Corporation' # CW - Texas Railroad Corporation
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CX'),'Taxpayer_Organizational_Name']='Foreign Railroad Corporation - OOS' # CX - Foreign Railroad Corporation – OOS
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'HF'),'Taxpayer_Organizational_Name']='Foreign Holding Company' # HF – Foreign Holding Company
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PB'),'Taxpayer_Organizational_Name']='Business General Partnership' # PB – Business General Partnership
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PF'),'Taxpayer_Organizational_Name']='Foreign Limited Partnership' # PF – Foreign Limited Partnership
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PI'),'Taxpayer_Organizational_Name']='Individual General Partnership' # PI – Individual General Partnership
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PL'),'Taxpayer_Organizational_Name']='Texas Limited Partnership' # PL – Texas Limited Partnership
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PV'),'Taxpayer_Organizational_Name']='Texas Joint Venture' # PV – Texas Joint Venture
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PW'),'Taxpayer_Organizational_Name']='Foreign Joint Venture' # PW – Foreign Joint Venture
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PX'),'Taxpayer_Organizational_Name']='Texas Limited Liability Partnership' # PX – Texas Limited Liability Partnership
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PY'),'Taxpayer_Organizational_Name']='Foreign Limited Liability Partnerhsip' # PY – Foreign Limited Liability Partnerhsip
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'SF'),'Taxpayer_Organizational_Name']='Foreign Joint Stock Company' # SF – Foreign Joint Stock Company
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'ST'),'Taxpayer_Organizational_Name']='Texas Joint Stock Company' # ST – Texas Joint Stock Company
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TF'),'Taxpayer_Organizational_Name']='Foreign Business Trust' # TF – Foreign Business Trust
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TH'),'Taxpayer_Organizational_Name']='Texas Real Estate Investment Trust' # TH – Texas Real Estate Investment Trust
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TI'),'Taxpayer_Organizational_Name']='Foreign Real Estate Investment Trust' # TI – Foreign Real Estate Investment Trust
df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TR'),'Taxpayer_Organizational_Name']='Texas Business Trust' # TR – Texas Business Trust
# Record Type Code:
df_ftact.loc[(df_ftact.Record_Type_Code == 'U'),'Record_Type_Name']='Secretary of State (SOS) File Number' # U = Secretary of State (SOS) File Number
df_ftact.loc[(df_ftact.Record_Type_Code == 'V'),'Record_Type_Name']='SOS Certificate of Authority (COA) File Number' # V = SOS Certificate of Authority (COA) File Number
df_ftact.loc[(df_ftact.Record_Type_Code == 'X'),'Record_Type_Name']='Comptroller Assigned File Number' # X = Comptroller Assigned File Number
df_ftact.head()
# (Description for context) SOS Charter/COA:
# Depending on the Record Type Code value, this number
# is the SOS, COA or Comptroller Assigned File Number.
# If the Record Type Code is an 'X', this field will be
# blank. They do not have a current SOS Charter/COA.
###Output
_____no_output_____
###Markdown
Responsibility_End_Reason_Name column
###Code
# Responsibility End Reason Code:
# This is for mostly for Record Type Code value 'X'.
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 0),'Responsibility_End_Reason_Name']='Active or Inactive with no Reason Code' # 00 = Active or Inactive with no Reason Code
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 1),'Responsibility_End_Reason_Name']='Discountinued Doing Business' # 01 = Discountinued Doing Business
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 2),'Responsibility_End_Reason_Name']='Dissolved in Home State' # 02 = Dissolved in Home State
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 3),'Responsibility_End_Reason_Name']='Merged Out of Existence' # 03 = Merged Out of Existence
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 4),'Responsibility_End_Reason_Name']='Converted' # 04 = Converted
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 5),'Responsibility_End_Reason_Name']='Consolidated' # 05 = Consolidated
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 6),'Responsibility_End_Reason_Name']='Forfeited in Home State' # 06 = Forfeited in Home State
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 8),'Responsibility_End_Reason_Name']='No Nexus' # 08 = No Nexus
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 9),'Responsibility_End_Reason_Name']='No Nexus – Dates not the same' # 09 = No Nexus – Dates not the same
df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 11),'Responsibility_End_Reason_Name']='Special Information Report' # 11 = Special Information Report
df_ftact.head()
###Output
_____no_output_____
###Markdown
SOS_Status_Name Column
###Code
# (Context description) SOS Charter/COA:
# Depending on the Record Type Code value, this number
# is the SOS, COA or Comptroller Assigned File Number.
# If the Record Type Code is an 'X', this field will be
# blank. They do not have a current SOS Charter/COA.
# SOS Status Code:
# For Charter/COA Numbers:
df_ftact.loc[(df_ftact.SOS_Status_Code == 'A'),'SOS_Status_Name']='Active' # A = Active
df_ftact.loc[(df_ftact.SOS_Status_Code == 'B'),'SOS_Status_Name']='Consolidated' # B = Consolidated
df_ftact.loc[(df_ftact.SOS_Status_Code == 'C'),'SOS_Status_Name']='Converted' # C = Converted
df_ftact.loc[(df_ftact.SOS_Status_Code == 'D'),'SOS_Status_Name']='Dissolved' # D = Dissolved
df_ftact.loc[(df_ftact.SOS_Status_Code == 'E'),'SOS_Status_Name']='Expired' # E = Expired
df_ftact.loc[(df_ftact.SOS_Status_Code == 'F'),'SOS_Status_Name']='Forfeited Franchise Tax' # F = Forfeited Franchise Tax
df_ftact.loc[(df_ftact.SOS_Status_Code == 'G'),'SOS_Status_Name']='Miscellaneous' # G = Miscellaneous
df_ftact.loc[(df_ftact.SOS_Status_Code == 'I'),'SOS_Status_Name']='Closed by FDIC' # I = Closed by FDIC
df_ftact.loc[(df_ftact.SOS_Status_Code == 'J'),'SOS_Status_Name']='State Charter Pulled' # J = State Charter Pulled
df_ftact.loc[(df_ftact.SOS_Status_Code == 'K'),'SOS_Status_Name']='Forfeited Registered Agent' # K = Forfeited Registered Agent
df_ftact.loc[(df_ftact.SOS_Status_Code == 'L'),'SOS_Status_Name']='Forfeited Registered Office' # L = Forfeited Registered Office
df_ftact.loc[(df_ftact.SOS_Status_Code == 'M'),'SOS_Status_Name']='Merger' # M = Merger
df_ftact.loc[(df_ftact.SOS_Status_Code == 'N'),'SOS_Status_Name']='Forfeited Hot Check' # N = Forfeited Hot Check
df_ftact.loc[(df_ftact.SOS_Status_Code == 'P'),'SOS_Status_Name']='Forfeited Court Order' # P = Forfeited Court Order
df_ftact.loc[(df_ftact.SOS_Status_Code == 'R'),'SOS_Status_Name']='Reinstated' # R = Reinstated
df_ftact.loc[(df_ftact.SOS_Status_Code == 'T'),'SOS_Status_Name']='Terminated' # T = Terminated
df_ftact.loc[(df_ftact.SOS_Status_Code == 'W'),'SOS_Status_Name']='Withdrawn' # W = Withdrawn
df_ftact.loc[(df_ftact.SOS_Status_Code == 'Y'),'SOS_Status_Name']='Dead at Conversion 69' # Y = Dead at Conversion 69
df_ftact.loc[(df_ftact.SOS_Status_Code == 'Z'),'SOS_Status_Name']='Dead at Conversion 83' # Z = Dead at Conversion 83
df_ftact.head()
###Output
_____no_output_____
###Markdown
Rigth_to_Tansact_Business_Name Column
###Code
# Exempt Reason Code:
# blank = Not Exempt
# rest = Exempt for various reasons. A list of value descriptions
# may be requested separately.
# Right to Transact Business Code:
# blank = Franchise Tax Ended
df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'A'),'Rigth_to_Tansact_Business_Name']='Active' # A = Active
df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'D'),'Rigth_to_Tansact_Business_Name']='Active – Eligible for Termination/Withdrawl' # D = Active – Eligible for Termination/Withdrawl
df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'N'),'Rigth_to_Tansact_Business_Name']='Forfeited' # N = Forfeited
df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'I'),'Rigth_to_Tansact_Business_Name']='Franchise Tax Involuntarily Ended' # I = Franchise Tax Involuntarily Ended
df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'U'),'Rigth_to_Tansact_Business_Name']='Franchise Tax Not Established' # U = Franchise Tax Not Established
df_ftact.head()
###Output
_____no_output_____
###Markdown
Formatting data
* changing float to int
* adding datetime format
###Code
df_ftact['Taxpayer_Zip_Code'] = df_ftact['Taxpayer_Zip_Code'].fillna(0)
df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].fillna(0)
df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].fillna(0)
df_ftact['Secretary_of_State_File_Number'] = df_ftact['Secretary_of_State_File_Number'].fillna(0)
df_ftact['NAICS_Code'] = df_ftact['NAICS_Code'].fillna(0)
df_ftact['Current_Exempt_Reason_Code'] = df_ftact['Current_Exempt_Reason_Code'].fillna(0)
df_ftact['Taxpayer_Zip_Code'] = df_ftact['Taxpayer_Zip_Code'].astype(np.int64)
df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].astype(np.int64)
df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].astype(np.int64)
df_ftact['Responsibility_Beginning_Date'] = df_ftact['Responsibility_Beginning_Date'].astype(np.int64)
df_ftact['Secretary_of_State_File_Number'] = df_ftact['Secretary_of_State_File_Number'].astype(np.int64)
df_ftact['NAICS_Code'] = df_ftact['NAICS_Code'].astype(np.int64)
df_ftact['Current_Exempt_Reason_Code'] = df_ftact['Current_Exempt_Reason_Code'].astype(np.int64)
df_ftact['SOS_Charter_Date'] = pd.to_datetime(df_ftact["SOS_Charter_Date"], format='%Y%m%d', errors='coerce')
df_ftact['SOS_Status_Date'] = pd.to_datetime(df_ftact["SOS_Status_Date"], format='%Y%m%d', errors='coerce')
df_ftact['Responsibility_Beginning_Date'] = pd.to_datetime(df_ftact["Responsibility_Beginning_Date"], format='%Y%m%d', errors='coerce')
df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].dt.normalize()
df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].dt.normalize()
df_ftact['Responsibility_Beginning_Date'] = df_ftact['Responsibility_Beginning_Date'].dt.normalize()
df_ftact = df_ftact[df_ftact['Taxpayer_Zip_Code']!=0]
df_ftact.head()
###Output
_____no_output_____
###Markdown
Upload DFs to the database
* Adding the database connection
* Defining the engine

**Note:** I was getting a charmap error when attempting to drop the data into the database. I defined encoding = utf-8, yet it still did not work. Only when I hardcoded the charset within the engine string did the error finally go away.
###Code
connection_string = f"{mysql_user}:{mysql_password}@localhost:3306/{db_name}?charset=utf8"
engine = create_engine(f'mysql://{connection_string}')
engine.table_names()
###Output
_____no_output_____
###Markdown
Creating two variables for today's date and today's datetime
###Code
currentDT = datetime.datetime.now()
DateTimeSent = currentDT.strftime("%Y-%m-%d %H:%M:%S")
dateCSV = currentDT.strftime("%Y-%m-%d")
print(dateCSV)
print(DateTimeSent)
###Output
2020-03-25
2020-03-25 02:13:35
###Markdown
Calling database tables for cross-referencing df data, to have non-duplicated data
* Grabbing data from the database and storing the taxpayer number column into a dataframe
###Code
ftact_in_db = pdsql.read_sql("SELECT Taxpayer_Number FROM franchise_tax_info",engine)
print(f"Data count for ftact from the database: {len(ftact_in_db)}\n")
try:
if df_fran.size != 0:
print(f"\nData count from the new df data for df_fran: {len(df_fran)}")
except Exception as e:
print("df_fran does not exist. Check your data source if it is available")
try:
if df_ftact.size != 0:
print(f"Data count from the new df data for df_ftact: {len(df_ftact)}")
except Exception as e:
print("df_ftact does not exist. Check your data source if it is available")
###Output
Data count for ftact from the database : 0
Data count for stact from the Database: 0
Data count for ftoffdir from the Database: 0
Data count from the new df data for df_ftact: 4236082
Data count from the new df data for df_stact: 1554488
df_ftoffdir does not exist. Check your data source if it is available
###Markdown
FTACT aka df_ftact: checking the database table against the df data to make sure there are no duplicate taxpayer numbers
* filtering new ftact with data from the database
* checking data for ftact and also adding a new column of today's date and time
* appending new companies (df_ftact) to csv and database
###Code
try:
df_ftact = df_ftact[~df_ftact['Taxpayer_Number'].astype(int).isin(ftact_in_db['Taxpayer_Number'].astype(int))]
if df_ftact.size != 0:
df_ftact['DateTime'] = DateTimeSent
print(f"There are {len(ftact_in_db)} data attributes in ftact table from the database\n{len(df_ftact)} new companies, based on tax payer number from filtered data df_tact")
df_ftact.to_sql(name='franchise_tax_info', con=engine, if_exists='append', index=False, chunksize=1000)
print(f"ftact to database append, completed")
f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+')
f.write(f'{DateTimeSent}\nftact_{dateCSV}.csv, {len(df_ftact)}, franchise_tax_info table, {len(ftact_in_db)}\n')
f.close()
else:
print("No new data")
f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+')
f.write(f'{DateTimeSent}\nftact_{dateCSV}.csv, {len(df_ftact)}, franchise_tax_info table, {len(ftact_in_db)}\n')
f.close()
except Exception as e:
print(f"Something went wrong, df_ftact was not able to append to database or no new data: {e}")
###Output
There are 0 data attributes in ftact table from the database
4236082 new companies, based on tax payer number from filtered data df_tact
ftact to database append, completed
###Markdown
Call the tables within the database and store them into a variable
* Going to compare new data from the database with df_fran
###Code
ftact_date_in_db = pdsql.read_sql("SELECT Taxpayer_Number FROM franchise_tax_info_date",engine)
print(f"There are {len(ftact_date_in_db)} records in frachise tax permit date table.\n")
###Output
There are 0 records in frachise tax permit date table.
There are 0 records in sales tax permit date table.
###Markdown
fran aka df_fran: checking the database table against the df data to make sure there are no duplicate taxpayer numbers
* filtering new df_fran with data from the database
* checking data for df_fran and also adding a new column of today's date and time
* appending new companies (fran) to csv and database
###Code
try:
df_fran = df_fran[~df_fran['Taxpayer_Number'].astype(int).isin(ftact_date_in_db['Taxpayer_Number'].astype(int))]
if df_fran.size != 0:
df_fran['DateTime'] = DateTimeSent
print(f"There are {len(ftact_date_in_db)} data attributes in df_fran table from the database\n{len(df_fran)} new companies, based on tax payer number from filtered data df_fran")
df_fran.to_sql(name='franchise_tax_info_date', con=engine, if_exists='append', index=False, chunksize=1000)
print(f"df_fran to database append, completed")
f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+')
f.write(f'fran_{dateCSV}.csv, {len(df_fran)}, franchise_tax_info_date table, {len(ftact_date_in_db)}\n')
f.close()
else:
print("No new data")
f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+')
f.write(f'fran_{dateCSV}.csv, {len(df_fran)}, franchise_tax_info_date table, {len(ftact_date_in_db)}\n')
f.close()
except Exception as e:
print(f"Something went wrong, df_fran was not able to append to database: {e}")
###Output
There are 0 data attributes in df_fran table from the database
111888 new companies, based on tax payer number from filtered data df_fran
df_fran to database append, completed
|
biobb_wf_md_setup_remote/notebooks/biobb_MDsetupRemote_tutorial.ipynb | ###Markdown
Protein MD Setup tutorial using BioExcel Building Blocks (biobb) with remote GROMACS execution

**Based on the official GROMACS tutorial:** [http://www.mdtutorials.com/gmx/lysozyme/index.html](http://www.mdtutorials.com/gmx/lysozyme/index.html)

***

This tutorial aims to illustrate the process of **setting up a simulation system** containing a **protein**, step by step, using the **BioExcel Building Blocks library (biobb)** and connecting remotely to a **super computer** in order to run some jobs. The particular example used is the **Lysozyme** protein (PDB code 1AKI).

***

Settings

Biobb modules used:
 - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases.
 - [biobb_model](https://github.com/bioexcel/biobb_model): Tools to model macromolecular structures.
 - [biobb_md](https://github.com/bioexcel/biobb_md): Tools to setup and run Molecular Dynamics simulations.
 - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories.
 - [biobb_remote](https://github.com/bioexcel/biobb_remote): Biobb_remote is a package to allow biobb's to be executed on remote sites through ssh.

Auxiliary libraries used:
 - [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels): Enables a Jupyter Notebook or JupyterLab application in one conda environment to access kernels for Python, R, and other languages found in other environments.
 - [nglview](http://nglviewer.org/nglview): Jupyter/IPython widget to interactively view molecular structures and trajectories in notebooks.
 - [ipywidgets](https://github.com/jupyter-widgets/ipywidgets): Interactive HTML widgets for Jupyter notebooks and the IPython kernel.
 - [plotly](https://plot.ly/python/offline/): Python interactive graphing library integrated in Jupyter notebooks.
 - [simpletraj](https://github.com/arose/simpletraj): Lightweight coordinate-only trajectory reader based on code from GROMACS, MDAnalysis and VMD.

Conda Installation and Launch

```console
git clone https://github.com/bioexcel/biobb_wf_md_setup_remote.git
cd biobb_wf_md_setup_remote
conda env create -f conda_env/environment.yml
conda activate biobb_MDsetupRemote_tutorial
jupyter-nbextension enable --py --user widgetsnbextension
jupyter-nbextension enable --py --user nglview
jupyter-notebook biobb_wf_md_setup/notebooks/biobb_MDsetupRemote_tutorial.ipynb
```

***

Pipeline steps:
 1. [Setting up remote access](#Setting-up-remote-access)
    * [Getting new credentials](#Generate-SSH-keys-and-store-locally)
    * Installing credentials on host
    * Setting host queue
 2. [Input Parameters](#input)
 3. [Fetching PDB Structure](#fetch)
 4. [Fix Protein Structure](#fix)
 5. [Create Protein System Topology](#top)
 6. [Create Solvent Box](#box)
 7. [Fill the Box with Water Molecules](#water)
 8. [Adding Ions](#ions)
 9. [Energetically Minimize the System](#min) (local)
 10. [Equilibrate the System (NVT)](#nvt) (remote)
 11. [Equilibrate the System (NPT)](#npt) (remote)
 12. [Free Molecular Dynamics Simulation](#free) (remote)
 13. [Post-processing and Visualizing Resulting 3D Trajectory](#post)
 14. [Output Files](#output)
 15. [Questions & Comments](#questions)

***
<img src="https://bioexcel.eu/wp-content/uploads/2019/04/Bioexcell_logo_1080px_transp.png" alt="Bioexcel2 logo" title="Bioexcel2 logo" width="400" />
***

Setting up remote access
Remote access uses standard ssh/sftp sessions. A specific public/private key pair will be generated (optional).
###Code
host = 'mn1.bsc.es'
userid = 'bscXXXXX'
host_config_path = '../conf/BSC_MN4.json'
###Output
_____no_output_____
###Markdown
Generate SSH keys and store locally
Skip to use user's credentials.
###Code
keys_file = '[email protected]'
from biobb_remote.ssh_credentials import SSHCredentials
credentials = SSHCredentials(
host=host, userid=userid, generate_key=True, look_for_keys=False
)
credentials.save(keys_file)
###Output
_____no_output_____
###Markdown
Get generated keys
###Code
credentials.get_private_key()
credentials.get_public_key()
credentials.sftp=None
###Output
_____no_output_____
###Markdown
The public key should be included in .ssh/authorized_keys, either manually or using install_host_auth (requires the user's own ssh credentials).
###Code
backup_file_ext = 'bck'
credentials.install_host_auth(backup_file_ext)
###Output
_____no_output_____
###Markdown
Let's recover the keys from the local file. Useful to reuse previous sessions.
###Code
new_credentials = SSHCredentials()
new_credentials.load_from_file(keys_file)
new_credentials.check_host_auth()
###Output
_____no_output_____
###Markdown
Setting up the connection to the host queueing system (SLURM). local_path is a local working directory and should already exist. remote_path is a base path on the remote computer and will be created when necessary; remote_path will contain a different directory for each instance of the task manager created. task_data_path keeps a local copy of the task manager status, allowing interrupted sessions to be recovered.
###Code
from os.path import join as opj  # used below for building local file paths
local_path = 'test_wdir'
remote_path = 'scratch/test_biobb'
task_data_path = 'task_data.json'
# queue settings are bundled in Slurm class according to the options
# on the remote computer
# modules are predefined bundles of HPC modules to be loaded.
queue_settings = 'default'
modules = ['biobb']
conda_env = None
from biobb_remote.slurm import Slurm
##Option 1: Adding Biobb credentials set previously
#slurm = Slurm()
#slurm.set_credentials(credentials)
#slurm.load_host_config(host_config_path)
#slurm.save(task_data_path)
##Option 2: Using user's own credentials
slurm = Slurm(host=host, userid=userid, look_for_keys=True)
slurm.load_host_config(host_config_path)
slurm.save(task_data_path)
#print(slurm.get_queue_info())
# NOT WORKING
###Output
_____no_output_____
###Markdown
Input parameters
**Input parameters** needed:
 - **pdbCode**: PDB code of the protein structure (e.g. 1AKI)
###Code
import nglview
import ipywidgets
pdbCode = "1AKI"
###Output
_____no_output_____
###Markdown
***
Fetching PDB structure
Downloading **PDB structure** with the **protein molecule** from the RCSB PDB database. Alternatively, a **PDB file** can be used as starting structure.
***
**Building Blocks** used:
 - [Pdb](https://biobb-io.readthedocs.io/en/latest/api.html#module-api.pdb) from **biobb_io.api.pdb**
***
###Code
# Downloading desired PDB file
# Import module
from biobb_io.api.pdb import Pdb
# Create properties dict and inputs/outputs
downloaded_pdb = opj(local_path, pdbCode+'.pdb')
prop = {
'pdb_code': pdbCode
}
#Create and launch bb
Pdb(output_pdb_path=downloaded_pdb,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
Visualizing the downloaded/given **PDB structure** using **NGL**:
###Code
# Show protein
view = nglview.show_structure_file(downloaded_pdb)
view.add_representation(repr_type='ball+stick', selection='all')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
###Output
_____no_output_____
###Markdown
***
Fix protein structure
**Checking** and **fixing** (if needed) the protein structure:
- **Modeling** **missing side-chain atoms**, modifying incorrect **amide assignments**, choosing **alternative locations**.
- **Checking** for missing **backbone atoms**, **heteroatoms**, **modified residues** and possible **atomic clashes**.
***
**Building Blocks** used:
 - [FixSideChain](https://biobb-model.readthedocs.io/en/latest/model.html#module-model.fix_side_chain) from **biobb_model.model.fix_side_chain**
***
###Code
# Check & Fix PDB
# Import module
from biobb_model.model.fix_side_chain import FixSideChain
# Create prop dict and inputs/outputs
fixed_pdb = opj(local_path, pdbCode + '_fixed.pdb')
# Create and launch bb
FixSideChain(input_pdb_path=downloaded_pdb,
output_pdb_path=fixed_pdb).launch()
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
Visualizing the fixed **PDB structure** using **NGL**. In this particular example, the checking step didn't find any issue to be solved, so there is no difference between the original structure and the fixed one.
###Code
# Show protein
view = nglview.show_structure_file(fixed_pdb)
view.add_representation(repr_type='ball+stick', selection='all')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.camera='orthographic'
view
###Output
_____no_output_____
###Markdown
***
Create protein system topology
**Building GROMACS topology** corresponding to the protein structure.
Force field used in this tutorial is [**amber99sb-ildn**](https://dx.doi.org/10.1002%2Fprot.22711): AMBER **parm99** force field with **corrections on backbone** (sb) and **side-chain torsion potentials** (ildn). Water molecules type used in this tutorial is [**spc/e**](https://pubs.acs.org/doi/abs/10.1021/j100308a038).
Adding **hydrogen atoms** if missing. Automatically identifying **disulfide bridges**.
Generating two output files:
- **GROMACS structure** (gro file)
- **GROMACS topology** ZIP compressed file containing:
  - *GROMACS topology top file* (top file)
  - *GROMACS position restraint file/s* (itp file/s)
***
**Building Blocks** used:
 - [Pdb2gmx](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.pdb2gmx) from **biobb_md.gromacs.pdb2gmx**
***
###Code
# Create system topology
# Import module
from biobb_md.gromacs.pdb2gmx import Pdb2gmx
# Create inputs/outputs
output_pdb2gmx_gro = opj(local_path, pdbCode+'_pdb2gmx.gro')
output_pdb2gmx_top_zip = opj(local_path, pdbCode+'_pdb2gmx_top.zip')
# Create and launch bb
Pdb2gmx(input_pdb_path=fixed_pdb,
output_gro_path=output_pdb2gmx_gro,
output_top_zip_path=output_pdb2gmx_top_zip).launch()
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
Visualizing the generated **GRO structure** using **NGL**. Note that **hydrogen atoms** were added to the structure by the **pdb2gmx GROMACS tool** when generating the **topology**.
###Code
# Show protein
view = nglview.show_structure_file(output_pdb2gmx_gro)
view.add_representation(repr_type='ball+stick', selection='all')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.camera='orthographic'
view
###Output
_____no_output_____
###Markdown
***
Create solvent box
Define the unit cell for the **protein structure MD system** to fill it with water molecules.
A **cubic box** is used to define the unit cell, with a **distance from the protein to the box edge of 1.0 nm**. The protein is **centered in the box**.
***
**Building Blocks** used:
 - [Editconf](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.editconf) from **biobb_md.gromacs.editconf**
***
###Code
# Editconf: Create solvent box
# Import module
from biobb_md.gromacs.editconf import Editconf
# Create prop dict and inputs/outputs
output_editconf_gro = opj(local_path, pdbCode+'_editconf.gro')
prop = {
'box_type': 'cubic',
'distance_to_molecule': 1.0
}
#Create and launch bb
Editconf(input_gro_path=output_pdb2gmx_gro,
output_gro_path=output_editconf_gro,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
***
Fill the box with water molecules
Fill the unit cell for the **protein structure system** with water molecules.
The solvent type used is the default **Simple Point Charge water (SPC)**, a generic equilibrated 3-point solvent model.
***
**Building Blocks** used:
 - [Solvate](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.solvate) from **biobb_md.gromacs.solvate**
***
###Code
# Solvate: Fill the box with water molecules
from biobb_md.gromacs.solvate import Solvate
# Create prop dict and inputs/outputs
output_solvate_gro = opj(local_path, pdbCode+'_solvate.gro')
output_solvate_top_zip = opj(local_path, pdbCode+'_solvate_top.zip')
# Create and launch bb
Solvate(input_solute_gro_path=output_editconf_gro,
output_gro_path=output_solvate_gro,
input_top_zip_path=output_pdb2gmx_top_zip,
output_top_zip_path=output_solvate_top_zip).launch()
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
Visualizing the **protein system** with the newly added **solvent box** using **NGL**. Note the **cubic box** filled with **water molecules** surrounding the **protein structure**, which is **centered** right in the middle of the cube.
###Code
# Show protein
view = nglview.show_structure_file(output_solvate_gro)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='solute', color='green')
view.add_representation(repr_type='ball+stick', selection='SOL')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.camera='orthographic'
view
###Output
_____no_output_____
###Markdown
***
Adding ions
Add ions to neutralize the **protein structure** charge:
- [Step 1](#ionsStep1): Creating portable binary run file for ion generation
- [Step 2](#ionsStep2): Adding ions to **neutralize** the system
***
**Building Blocks** used:
 - [Grompp](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.grompp) from **biobb_md.gromacs.grompp**
 - [Genion](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.genion) from **biobb_md.gromacs.genion**
***
Step 1: Creating portable binary run file for ion generation
A simple **energy minimization** set of molecular dynamics parameters (mdp) properties will be used to generate the portable binary run file for **ion generation**, although **any legitimate combination of parameters** could be used in this step.
###Code
# Grompp: Creating portable binary run file for ion generation
from biobb_md.gromacs.grompp import Grompp
# Create prop dict and inputs/outputs
output_gppion_tpr = opj(local_path, pdbCode+'_gppion.tpr')
prop = {
'simulation_type':'minimization'
}
# Create and launch bb
Grompp(input_gro_path=output_solvate_gro,
input_top_zip_path=output_solvate_top_zip,
output_tpr_path=output_gppion_tpr,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Step 2: Adding ions to neutralize the system
Replace **solvent molecules** with **ions** to **neutralize** the system.
###Code
# Genion: Adding ions to neutralize the system
from biobb_md.gromacs.genion import Genion
# Create prop dict and inputs/outputs
output_genion_gro = opj(local_path, pdbCode+'_genion.gro')
output_genion_top_zip = opj(local_path, pdbCode+'_genion_top.zip')
prop={
'neutral':True
}
# Create and launch bb
Genion(input_tpr_path=output_gppion_tpr,
output_gro_path=output_genion_gro,
input_top_zip_path=output_solvate_top_zip,
output_top_zip_path=output_genion_top_zip,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
Visualizing the **neutralized protein system** with the newly added **ions** using **NGL**.
###Code
# Show protein
view = nglview.show_structure_file(output_genion_gro)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='solute', color='sstruc')
view.add_representation(repr_type='ball+stick', selection='NA')
view.add_representation(repr_type='ball+stick', selection='CL')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.camera='orthographic'
view
###Output
_____no_output_____
###Markdown
***
Energetically minimize the system
Energetically minimize the **protein system** till reaching a desired potential energy.
- [Step 1](#emStep1): Creating portable binary run file for energy minimization
- [Step 2](#emStep2): Energetically minimize the **system** till reaching a force of 500 kJ mol-1 nm-1.
- [Step 3](#emStep3): Checking **energy minimization** results. Plotting energy by time during the **minimization** process.
***
**Building Blocks** used:
 - [Grompp](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.grompp) from **biobb_md.gromacs.grompp**
 - [Mdrun](https://biobb-md.readthedocs.io/en/latest/gromacs.html#module-gromacs.mdrun) from **biobb_md.gromacs.mdrun**
 - [GMXEnergy](https://biobb-analysis.readthedocs.io/en/latest/gromacs.html#module-gromacs.gmx_energy) from **biobb_analysis.gromacs.gmx_energy**
***
Step 1: Creating portable binary run file for energy minimization
The **minimization** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **energy minimization**:
- integrator = steep ; Algorithm (steep = steepest descent minimization)
- emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
- emstep = 0.01 ; Minimization step size (nm)
- nsteps = 50000 ; Maximum number of (minimization) steps to perform

In this particular example, the method used to run the **energy minimization** is the default **steepest descent**, but the **maximum force** is placed at **500 KJ/mol\*nm^2**, and the **maximum number of steps** to perform (if the maximum force is not reached) to **5,000 steps**.
###Code
# Grompp: Creating portable binary run file for mdrun
from biobb_md.gromacs.grompp import Grompp
# Create prop dict and inputs/outputs
output_gppmin_tpr = opj(local_path, pdbCode+'_gppmin.tpr')
prop = {
'mdp':{
'emtol':'500',
'nsteps':'5000'
},
'simulation_type': 'minimization'
}
# Create and launch bb
Grompp(input_gro_path=output_genion_gro,
input_top_zip_path=output_genion_top_zip,
output_tpr_path=output_gppmin_tpr,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Step 2: Running Energy Minimization (remote)
Running **energy minimization** using the **tpr file** generated in the previous step. Setting local data and uploading files to remote.
###Code
slurm.set_local_data_bundle(local_path, add_files=False)
slurm.task_data['local_data_bundle'].add_file(output_gppmin_tpr)
slurm.send_input_data(remote_path, overwrite=True)
# slurm.task_data['local_data_bundle'].file_stats['../test/test_wdir/1AKI_gppmin.tpr'].st_size
# NOT WORKING
slurm.load_host_config(host_config_path)
slurm.save(task_data_path)
###Output
_____no_output_____
###Markdown
Loading pre-defined host configuration
###Code
slurm.load_host_config(host_config_path)
slurm.host_config
# Mdrun: Running minimization
python_import = 'from biobb_md.gromacs.mdrun import Mdrun'
# Create prop dict and inputs/outputs
output_min_trr = pdbCode+'_min.trr'
output_min_gro = pdbCode+'_min.gro'
output_min_edr = pdbCode+'_min.edr'
output_min_log = pdbCode+'_min.log'
files = {
'input_tpr_path' : pdbCode + '_gppmin.tpr',
'output_trr_path' : output_min_trr,
'output_gro_path' : output_min_gro,
'output_edr_path' : output_min_edr,
'output_log_path' : output_min_log
}
# properties
# Python dict
prop = {
'gmx_path': 'gmx_mpi'
}
# YAML file
# prop = 'properties_path.yaml'
# Json string
# prop = '{"gmx_path": "gmx_mpi"}'
# Galaxy escaped Json string
# prop = '__oc____dq__gmx_path__dq__:__dq__gmx_mpi__dq____cc__'
# patching queue settings
patch={'qos':'debug', 'nodes':2, 'ntasks': 2, 'ntasks-per-node': 2, 'cpus-per-task': 24, 'time':'2:00:00' }
#patch={'qos':'debug', 't':'24:00:00', 'nodes':2, 'ntasks-per-node': 2, 'cpus-per-task': 48, 'ntasks': 2}
slurm.set_custom_settings(patch=patch)
# get_remote_py_script generates one-line python script appropriate
# for a single biobb execution on a slurm job
# Alternatively, a file containing a more complex script can be loaded from disk
slurm.submit(
queue_settings='custom',
modules=modules,
conda_env=conda_env,
local_run_script=slurm.get_remote_py_script(python_import, files, 'Mdrun', properties=prop)
)
slurm.save(task_data_path)
###Output
_____no_output_____
###Markdown
Task progression is maintained in a local file
###Code
slurm.save(task_data_path)
###Output
_____no_output_____
###Markdown
Waiting for job completion and saving status. Poll time is in seconds.
###Code
slurm.check_job(poll_time=5)
slurm.save(task_data_path)
###Output
_____no_output_____
###Markdown
Getting logs
###Code
#slurm.get_remote_file_stats()
# NOT WORKING
print('\n'.join(slurm.get_logs()))
###Output
_____no_output_____
###Markdown
Recovering output files to local_path
###Code
slurm.get_output_data(overwrite=False)
slurm.task_data['output_data_bundle'].files
###Output
_____no_output_____
###Markdown
Step 3: Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** by time during the minimization process.
###Code
# GMXEnergy: Getting system energy by time
from biobb_analysis.gromacs.gmx_energy import GMXEnergy
# Create prop dict and inputs/outputs
output_min_edr = local_path + "/" + pdbCode + "_min.edr"
output_min_ene_xvg = local_path + "/" + pdbCode+'_min_ene.xvg'
prop = {
'terms': ["Potential"]
}
# Create and launch bb
GMXEnergy(input_energy_path=output_min_edr,
output_xvg_path=output_min_ene_xvg,
properties=prop).launch()
import plotly
import plotly.graph_objs as go
#Read data from file and filter energy values higher than 1000 Kj/mol^-1
with open(output_min_ene_xvg,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy KJ/mol-1")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Equilibrate the system (NVT)Equilibrate the **protein system** in **NVT ensemble** (constant Number of particles, Volume and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.- [Step 1](eqNVTStep1): Creating portable binary run file for system equilibration- [Step 2](eqNVTStep2): Equilibrate the **protein system** with **NVT** ensemble.- [Step 3](eqNVTStep3): Checking **NVT Equilibration** results. Plotting **system temperature** by time during the **NVT equilibration** process. *****Building Blocks** used:- [Grompp](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.mdrun) from **biobb_md.gromacs.mdrun** - [GMXEnergy](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_energy) from **biobb_analysis.gromacs.gmx_energy** *** Step 1: Creating portable binary run file for system equilibration (NVT)The **nvt** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **NVT equilibration** with **protein restraints** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)):- Define = -DPOSRES- integrator = md- dt = 0.002- nsteps = 5000- pcoupl = no- gen_vel = yes- gen_temp = 300- gen_seed = -1In this particular example, the default parameters will be used: **md** integrator algorithm, a **step size** of **2fs**, **5,000 equilibration steps** with the protein **heavy atoms restrained**, and a temperature of **300K**.*Please note that for the sake of time this tutorial is only running 10ps of NVT equilibration, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/06_equil.html) the simulated time was 100ps.*
###Code
# Grompp: Creating portable binary run file for NVT Equilibration
from biobb_md.gromacs.grompp import Grompp
# Create prop dict and inputs/outputs
input_min_gro = opj(local_path, pdbCode + '_min.gro')
input_genion_top_zip = opj(local_path, pdbCode + '_genion_top.zip')
output_gppnvt_tpr = opj(local_path, pdbCode+'_gppnvt.tpr')
prop = {
'mdp':{
'nsteps': 5000,
'dt': 0.002,
'Define': '-DPOSRES',
#'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA
},
'simulation_type': 'nvt'
}
# Create and launch bb
Grompp(input_gro_path=input_min_gro,
input_top_zip_path=input_genion_top_zip,
output_tpr_path=output_gppnvt_tpr,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Uploading new tpr file to remote
###Code
slurm.task_data['local_data_bundle'].add_file(output_gppnvt_tpr)
slurm.send_input_data(remote_path, overwrite= False)
###Output
_____no_output_____
###Markdown
Step 2: Running NVT equilibration (remote) Preparing custom queue settings for Slurm
###Code
"""patch = slurm.prep_auto_settings(nodes=1, cpus_per_task=40)
slurm.set_custom_settings(patch=patch, clean=True)
# Settings changes can be accumulated
patch = {'time':'1:00:00'}
slurm.set_custom_settings(ref_setting='custom', patch=patch)
print(slurm.host_config['qsettings']['custom'])"""
# NOT WORKING
# Mdrun: Running Equilibration NVT
python_import = 'from biobb_md.gromacs.mdrun import Mdrun'
# Create prop dict and inputs/outputs
input_gppnvt_tpr = pdbCode + '_gppnvt.tpr'
output_nvt_trr = pdbCode+'_nvt.trr'
output_nvt_gro = pdbCode+'_nvt.gro'
output_nvt_edr = pdbCode+'_nvt.edr'
output_nvt_log = pdbCode+'_nvt.log'
output_nvt_cpt = pdbCode+'_nvt.cpt'
files = {
'input_tpr_path' : input_gppnvt_tpr,
'output_trr_path' : output_nvt_trr,
'output_gro_path' : output_nvt_gro,
'output_edr_path' : output_nvt_edr,
'output_log_path' : output_nvt_log,
'output_cpt_path' : output_nvt_cpt
}
# get_remote_py_script generates one-line python script appropriate for a
# single biobb execution
# Alternatively, a file containing a more complex script can be loaded
prop={'gmx_path':'gmx_mpi'}
slurm.submit(
'custom',
modules,
slurm.get_remote_py_script(python_import, files, 'Mdrun', properties=prop)
)
slurm.check_job(poll_time=5)
slurm.save(task_data_path)
slurm.get_output_data(overwrite=False)
print('\n'.join(slurm.get_logs()))
###Output
_____no_output_____
###Markdown
Step 3: Checking NVT Equilibration resultsChecking **NVT Equilibration** results. Plotting **system temperature** by time during the NVT equilibration process.
###Code
# GMXEnergy: Getting system temperature by time during NVT Equilibration
from biobb_analysis.gromacs.gmx_energy import GMXEnergy
# Create prop dict and inputs/outputs
input_nvt_edr = local_path + '/' + pdbCode + '_nvt.edr'
output_nvt_temp_xvg = local_path + '/' + pdbCode+'_nvt_temp.xvg'
prop = {
'terms': ["Temperature"]
}
# Create and launch bb
GMXEnergy(input_energy_path=input_nvt_edr,
output_xvg_path=output_nvt_temp_xvg,
properties=prop).launch()
import plotly
import plotly.graph_objs as go
# Read temperature data from file
with open(output_nvt_temp_xvg,'r') as temperature_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in temperature_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Temperature during NVT Equilibration",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Equilibrate the system (NPT)Equilibrate the **protein system** in **NPT** ensemble (constant Number of particles, Pressure and Temperature).- [Step 1](eqNPTStep1): Creating portable binary run file for system equilibration- [Step 2](eqNPTStep2): Equilibrate the **protein system** with **NPT** ensemble.- [Step 3](eqNPTStep3): Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.*****Building Blocks** used: - [Grompp](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.mdrun) from **biobb_md.gromacs.mdrun** - [GMXEnergy](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_energy) from **biobb_analysis.gromacs.gmx_energy** *** Step 1: Creating portable binary run file for system equilibration (NPT)The **npt** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **NPT equilibration** with **protein restraints** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)):- Define = -DPOSRES- integrator = md- dt = 0.002- nsteps = 5000- pcoupl = Parrinello-Rahman- pcoupltype = isotropic- tau_p = 1.0- ref_p = 1.0- compressibility = 4.5e-5- refcoord_scaling = com- gen_vel = noIn this particular example, the default parameters will be used: **md** integrator algorithm, a **time step** of **2fs**, **5,000 equilibration steps** with the protein **heavy atoms restrained**, and a Parrinello-Rahman **pressure coupling** algorithm.*Please note that for the sake of time this tutorial is only running 10ps of NPT equilibration, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/07_equil2.html) the simulated time was 100ps.*
###Code
# Grompp: Creating portable binary run file for NPT System Equilibration
from biobb_md.gromacs.grompp import Grompp
# Create prop dict and inputs/outputs
input_nvt_gro = local_path + "/" + pdbCode + '_nvt.gro'
output_gppnpt_tpr = local_path + "/" + pdbCode+'_gppnpt.tpr'
prop = {
'mdp':{
'nsteps':'5000',
#'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA
},
'simulation_type': 'npt'
}
# Create and launch bb
Grompp(input_gro_path=input_nvt_gro,
input_top_zip_path=input_genion_top_zip,
output_tpr_path=output_gppnpt_tpr,
input_cpt_path=output_nvt_cpt,
properties=prop).launch()
slurm.task_data['local_data_bundle'].add_file(output_gppnpt_tpr)
slurm.send_input_data(remote_path, overwrite= False)
###Output
_____no_output_____
###Markdown
Step 2: Running NPT equilibration
###Code
# Mdrun: Running NPT System Equilibration
python_import = 'from biobb_md.gromacs.mdrun import Mdrun'
# Create prop dict and inputs/outputs
input_nvt_tpr = pdbCode+'_gppnpt.tpr'
output_npt_trr = pdbCode+'_npt.trr'
output_npt_gro = pdbCode+'_npt.gro'
output_npt_edr = pdbCode+'_npt.edr'
output_npt_log = pdbCode+'_npt.log'
output_npt_cpt = pdbCode+'_npt.cpt'
files = {
'input_tpr_path' : input_nvt_tpr,
'output_trr_path' :output_npt_trr,
'output_gro_path' :output_npt_gro,
'output_edr_path' :output_npt_edr,
'output_log_path' :output_npt_log,
'output_cpt_path' :output_npt_cpt
}
# get_remote_py_script generates one-line python script appropriate for a
# single biobb execution
# Alternatively, a file containing a more complex script can be loaded
prop={'gmx_path':'gmx_mpi'}
slurm.submit(
'custom',
modules,
slurm.get_remote_py_script(python_import, files, 'Mdrun', properties=prop)
)
slurm.check_job(poll_time=5)
slurm.save(task_data_path)
#slurm.task_data['output_data_bundle'].file_stats
# NOT WORKING
print('\n'.join(slurm.get_logs()))
slurm.get_output_data(overwrite=False)
slurm.save(task_data_path)
###Output
_____no_output_____
###Markdown
Step 3: Checking NPT Equilibration resultsChecking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.
###Code
# GMXEnergy: Getting system pressure and density by time during NPT Equilibration
from biobb_analysis.gromacs.gmx_energy import GMXEnergy
# Create prop dict and inputs/outputs
input_npt_edr = local_path + "/" + pdbCode + '_npt.edr'
output_npt_pd_xvg = pdbCode+'_npt_PD.xvg'
prop = {
'terms': ["Pressure","Density"]
}
# Create and launch bb
GMXEnergy(input_energy_path=input_npt_edr,
output_xvg_path=output_npt_pd_xvg,
properties=prop).launch()
import plotly
from plotly import subplots
import plotly.graph_objs as go
# Read pressure and density data from file
with open(output_npt_pd_xvg,'r') as pd_file:
x,y,z = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]),float(line.split()[2]))
for line in pd_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
trace1 = go.Scatter(
x=x,y=y
)
trace2 = go.Scatter(
x=x,y=z
)
fig = subplots.make_subplots(rows=1, cols=2, print_grid=False)
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)
fig['layout']['xaxis1'].update(title='Time (ps)')
fig['layout']['xaxis2'].update(title='Time (ps)')
fig['layout']['yaxis1'].update(title='Pressure (bar)')
fig['layout']['yaxis2'].update(title='Density (Kg*m^-3)')
fig['layout'].update(title='Pressure and Density during NPT Equilibration')
fig['layout'].update(showlegend=False)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Free Molecular Dynamics SimulationUpon completion of the **two equilibration phases (NVT and NPT)**, the system is now well-equilibrated at the desired temperature and pressure. The **position restraints** can now be released. The last step of the **protein** MD setup is a short, **free MD simulation**, to ensure the robustness of the system. - [Step 1](mdStep1): Creating portable binary run file to run a **free MD simulation**.- [Step 2](mdStep2): Run short MD simulation of the **protein system**.- [Step 3](mdStep3): Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. *****Building Blocks** used: - [Grompp](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://biobb-md.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.mdrun) from **biobb_md.gromacs.mdrun** - [GMXRms](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_rms) from **biobb_analysis.gromacs.gmx_rms** - [GMXRgyr](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_rgyr) from **biobb_analysis.gromacs.gmx_rgyr** *** Step 1: Creating portable binary run file to run a free MD simulationThe **free** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **free MD simulation** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)):- integrator = md- dt = 0.002 (ps)- nsteps = 500000In this particular example, the default parameters will be used: **md** integrator algorithm, a **time step** of **2fs**, and a total of **50,000 md steps** (100ps).*Please note that for the sake of time this tutorial is only running 100ps of free MD, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/08_MD.html) the simulated time was 1ns (1000ps).*
###Code
# Grompp: Creating portable binary run file for mdrun
from biobb_md.gromacs.grompp import Grompp
# Create prop dict and inputs/outputs
input_npt_gro = local_path + '/' + pdbCode + '_npt.gro'
output_gppmd_tpr = local_path + '/' + pdbCode+'_gppmd.tpr'
prop = {
'mdp':{
'nsteps':'50000',
#'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA
},
'simulation_type': 'free'
}
# Create and launch bb
Grompp(input_gro_path=input_npt_gro,
input_top_zip_path=input_genion_top_zip,
output_tpr_path=output_gppmd_tpr,
input_cpt_path=output_npt_cpt,
properties=prop).launch()
slurm.task_data['local_data_bundle'].add_file(output_gppmd_tpr)
slurm.send_input_data(remote_path, overwrite= False)
###Output
_____no_output_____
###Markdown
Step 2: Running short free MD simulation
###Code
# Mdrun: Running NPT System Equilibration
python_import = 'from biobb_md.gromacs.mdrun import Mdrun'
# Create prop dict and inputs/outputs
input_npt_tpr = pdbCode+'_gppmd.tpr'
output_md_trr = pdbCode+'_md.trr'
output_md_gro = pdbCode+'_md.gro'
output_md_edr = pdbCode+'_md.edr'
output_md_log = pdbCode+'_md.log'
output_md_cpt = pdbCode+'_md.cpt'
files = {
'input_tpr_path' : input_npt_tpr,
'output_trr_path' :output_md_trr,
'output_gro_path' :output_md_gro,
'output_edr_path' :output_md_edr,
'output_log_path' :output_md_log,
'output_cpt_path' :output_md_cpt
}
# get_remote_py_script generates one-line python script appropriate for a
# single biobb execution
# Alternatively, a file containing a more complex script can be loaded
prop={'gmx_path':'gmx_mpi'}
slurm.submit(
'custom',
modules,
slurm.get_remote_py_script(python_import, files, 'Mdrun', properties=prop)
)
slurm.check_job(poll_time=50)
slurm.save(task_data_path)
slurm.get_output_data(overwrite=False)
###Output
_____no_output_____
###Markdown
Step 3: Checking free MD simulation resultsChecking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. **RMSd** against the **experimental structure** (input structure of the pipeline) and against the **minimized and equilibrated structure** (output structure of the NPT equilibration step).
###Code
# GMXRms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against minimized and equilibrated snapshot (backbone atoms)
from biobb_analysis.gromacs.gmx_rms import GMXRms
# Create prop dict and inputs/outputs
input_gppmd_tpr = opj(local_path, pdbCode + '_gppmd.tpr')
input_md_trr = opj(local_path, pdbCode + '_md.trr')
output_rms_first = opj(local_path, pdbCode+'_rms_first.xvg')
prop = {
'selection': 'Backbone',
#'selection': 'non-Water'
}
# Create and launch bb
GMXRms(input_structure_path=input_gppmd_tpr,
input_traj_path=input_md_trr,
output_xvg_path=output_rms_first,
properties=prop).launch()
# GMXRms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against experimental structure (backbone atoms)
from biobb_analysis.gromacs.gmx_rms import GMXRms
# Create prop dict and inputs/outputs
input_gppmin_tpr = opj(local_path, pdbCode + '_gppmin.tpr')
input_traj_tpr = opj(local_path, pdbCode + '_md.trr')
output_rms_exp = pdbCode+'_rms_exp.xvg'
prop = {
'selection': 'Backbone',
#'selection': 'non-Water'
}
# Create and launch bb
GMXRms(input_structure_path=input_gppmin_tpr,
input_traj_path=input_md_trr,
output_xvg_path=output_rms_exp,
properties=prop).launch()
import plotly
import plotly.graph_objs as go
# Read RMS vs first snapshot data from file
with open(output_rms_first,'r') as rms_first_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_first_file
if not line.startswith(("#","@"))
])
)
# Read RMS vs experimental structure data from file
with open(output_rms_exp,'r') as rms_exp_file:
x2,y2 = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_exp_file
if not line.startswith(("#","@"))
])
)
trace1 = go.Scatter(
x = x,
y = y,
name = 'RMSd vs first'
)
trace2 = go.Scatter(
x = x,
y = y2,
name = 'RMSd vs exp'
)
data = [trace1, trace2]
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": data,
"layout": go.Layout(title="RMSd during free MD Simulation",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "RMSd (nm)")
)
}
plotly.offline.iplot(fig)
# GMXRgyr: Computing Radius of Gyration to measure the protein compactness during the free MD simulation
from biobb_analysis.gromacs.gmx_rgyr import GMXRgyr
# Create prop dict and inputs/outputs
output_rgyr = opj(local_path, pdbCode+'_rgyr.xvg')
prop = {
'selection': 'Backbone'
}
# Create and launch bb
GMXRgyr(input_structure_path=input_gppmin_tpr,
input_traj_path=input_md_trr,
output_xvg_path=output_rgyr,
properties=prop).launch()
import plotly
import plotly.graph_objs as go
# Read Rgyr data from file
with open(output_rgyr,'r') as rgyr_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rgyr_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Radius of Gyration",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "Rgyr (nm)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Post-processing and Visualizing resulting 3D trajectoryPost-processing and Visualizing the **protein system** MD setup **resulting trajectory** using **NGL**- [Step 1](ppStep1): *Imaging* the resulting trajectory, **stripping out water molecules and ions** and **correcting periodicity issues**.- [Step 2](ppStep2): Generating a *dry* structure, **removing water molecules and ions** from the final snapshot of the MD setup pipeline.- [Step 3](ppStep3): Visualizing the *imaged* trajectory using the *dry* structure as a **topology**. *****Building Blocks** used: - [GMXImage](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_image) from **biobb_analysis.gromacs.gmx_image** - [GMXTrjConvStr](https://biobb-analysis.readthedocs.io/en/latest/gromacs.htmlmodule-gromacs.gmx_trjconv_str) from **biobb_analysis.gromacs.gmx_trjconv_str** *** Step 1: *Imaging* the resulting trajectory.Stripping out **water molecules and ions** and **correcting periodicity issues**
###Code
# GMXImage: "Imaging" the resulting trajectory
# Removing water molecules and ions from the resulting structure
from biobb_analysis.gromacs.gmx_image import GMXImage
# Create prop dict and inputs/outputs
output_imaged_traj = opj(local_path, pdbCode+'_imaged_traj.trr')
prop = {
'center_selection': 'Protein',
'output_selection': 'Protein',
'pbc' : 'mol',
'center' : True
}
# Create and launch bb
GMXImage(input_traj_path=input_md_trr,
input_top_path=input_gppmd_tpr,
output_traj_path=output_imaged_traj,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Step 2: Generating the output *dry* structure.**Removing water molecules and ions** from the resulting structure
###Code
# GMXTrjConvStr: Converting and/or manipulating a structure
# Removing water molecules and ions from the resulting structure
# The "dry" structure will be used as a topology to visualize
# the "imaged dry" trajectory generated in the previous step.
from biobb_analysis.gromacs.gmx_trjconv_str import GMXTrjConvStr
# Create prop dict and inputs/outputs
input_md_gro = opj(local_path, pdbCode + '_md.gro')
output_dry_gro = opj(local_path, pdbCode+'_md_dry.gro')
prop = {
'selection': 'Protein'
}
# Create and launch bb
GMXTrjConvStr(input_structure_path=input_md_gro,
input_top_path=input_gppmd_tpr,
output_str_path=output_dry_gro,
properties=prop).launch()
###Output
_____no_output_____
###Markdown
Step 3: Visualizing the generated dehydrated trajectory.Using the **imaged trajectory** (output of the [Post-processing step 1](ppStep1)) with the **dry structure** (output of the [Post-processing step 2](ppStep2)) as a topology.
###Code
# Show trajectory
view = nglview.show_simpletraj(nglview.SimpletrajTrajectory(output_imaged_traj, output_dry_gro), gui=True)
view
###Output
_____no_output_____
###Markdown
Clean remote files and remove credentials
###Code
slurm.clean_remote()
#credentials.remove_host_auth()
###Output
_____no_output_____ |
books/Python-for-Data-Analysis/06.ipynb | ###Markdown
Writing Data Out to Text Format
###Code
!cat pydata-book/ch06/ex5.csv
data = pd.read_csv('pydata-book/ch06/ex5.csv')
data
data.to_csv('test_out.csv')
!cat test_out.csv
!rm test_out.csv
import sys
data.to_csv(sys.stdout, sep='|')
data.to_csv(sys.stdout, na_rep='NULL')
data.to_csv(sys.stdout, index=False, header=False, na_rep='NULL')
data.to_csv(sys.stdout, index=False, columns=list('abc'), na_rep='NULL')
dates = pd.date_range('1/1/2000', periods=7)
dates
ts = Series(np.arange(7), index=dates)
ts.to_csv('test_out.csv')
!cat test_out.csv
!rm test_out.csv
pd.read_csv('pydata-book/ch06/tseries.csv', header=None, index_col=0, parse_dates=True).squeeze()  # Series.from_csv was removed from pandas
###Output
_____no_output_____
###Markdown
Working with Delimited Formats Manually
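The cell below reads a delimited file with `csv.reader`; writing works symmetrically with `csv.writer`. A minimal sketch (the output file name and delimiter are only illustrative):

```python
import csv

# Illustrative only: write a small ';'-delimited file
with open('mydata.csv', 'w', newline='') as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerow(('one', 'two', 'three'))
    writer.writerow(('1', '2', '3'))
```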
###Code
!cat pydata-book/ch06/ex7.csv
import csv
f = open('pydata-book/ch06/ex7.csv')
reader = csv.reader(f)
for line in reader:
print(line)
lines = list(csv.reader(open('pydata-book/ch06/ex7.csv')))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
###Output
_____no_output_____
###Markdown
JSON Data XML and HTML: Web Scraping
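No JSON cell survived in this copy; a minimal sketch of the usual round trip with the standard library (the sample record is made up):

```python
import json

obj = '{"name": "Wes", "pet": null, "siblings": [{"name": "Scott", "age": 25}]}'
result = json.loads(obj)      # JSON string -> Python objects
asjson = json.dumps(result)   # Python objects -> JSON string
siblings = pd.DataFrame(result['siblings'], columns=['name', 'age'])
```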
###Code
from lxml import html
parsed = html.parse(open('pydata-book/ch06/fdic_failed_bank_list.html'))
doc = parsed.getroot()
links = doc.findall('.//a')
links[15:20]
links[0].get('href')
links[0].text_content()
urls = [lnk.get('href') for lnk in doc.findall('.//a')]
urls[-10:]
from lxml import objectify
path = 'pydata-book/ch06/mta_perf/Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ', 'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
el_data = {}
for child in elt.getchildren():
if child.tag in skip_fields:
continue
el_data[child.tag] = child.pyval
data.append(el_data)
perf = DataFrame(data)
perf
###Output
_____no_output_____
###Markdown
Binary Data Formats
###Code
frame = pd.read_csv('pydata-book/ch06/ex1.csv')
frame
###Output
_____no_output_____
###Markdown
Using the HDF5 Format
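Once the store created in the next cell exists, objects can be read back dict-style and the file closed; a minimal sketch:

```python
# Assumes 'mydata.h5' was written as in the following cell
store = pd.HDFStore('mydata.h5')
obj1 = store['obj1']   # retrieve the stored DataFrame by key
store.close()
```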
###Code
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
###Output
_____no_output_____
###Markdown
Reading Microsoft Excel Files
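The next cell only reads a sheet into `table`; writing back out is the mirror operation. A minimal sketch that could follow it (the output file name is only illustrative):

```python
# Illustrative only: write the parsed sheet back to a new Excel file
table.to_excel('ex1_out.xlsx', sheet_name='Sheet1', index=False)
```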
###Code
xls_file = pd.ExcelFile('pydata-book/ch06/ex1.xlsx')
table = xls_file.parse('Sheet1')
###Output
_____no_output_____
###Markdown
Using HTML and Web APIs Using Databases
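The Web API part has no cell here; a minimal sketch with the `requests` package (the endpoint is only a placeholder):

```python
import requests

url = 'https://api.github.com/repos/pandas-dev/pandas/issues'  # placeholder endpoint
resp = requests.get(url)
data = resp.json()                                   # list of dicts decoded from JSON
issues = pd.DataFrame(data, columns=['number', 'title', 'state'])
```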
###Code
import sqlite3
query = """
CREATE TABLE test (a VARCHAR(20), b VARCHAR(20), c REAL, d INTEGER);
"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()
data = [('Atlanta', 'Georgia', 1.25, 5),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = 'INSERT INTO test VALUES (?, ?, ?, ?)'
con.executemany(stmt, data)
con.commit()
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
cursor.description
list(zip(*cursor.description))
DataFrame(rows, columns=[col[0] for col in cursor.description])
import pandas.io.sql as sql
sql.read_sql('select * from test', con)  # read_frame was renamed to read_sql in newer pandas
###Output
_____no_output_____
###Markdown
Reading and Writing Data in Text Format
###Code
!cat pydata-book/ch06/ex1.csv
df = pd.read_csv('pydata-book/ch06/ex1.csv')
df
pd.read_table('pydata-book/ch06/ex1.csv', sep=',')
!cat pydata-book/ch06/ex2.csv
pd.read_csv('pydata-book/ch06/ex2.csv', header=None)
pd.read_csv('pydata-book/ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('pydata-book/ch06/ex2.csv', names=names, index_col='message')
!cat pydata-book/ch06/csv_mindex.csv
parsed = pd.read_csv('pydata-book/ch06/csv_mindex.csv', index_col=['key1', 'key2'])
parsed
list(open('pydata-book/ch06/ex3.txt'))
result = pd.read_table('pydata-book/ch06/ex3.txt', sep='\s+')
result # there is one fewer column name than data columns, so the first column is inferred to be the index
!cat pydata-book/ch06/ex4.csv
pd.read_csv('pydata-book/ch06/ex4.csv', skiprows=[0, 2, 3])
!cat pydata-book/ch06/ex5.csv
result = pd.read_csv('pydata-book/ch06/ex5.csv')
result
pd.isnull(result)
result = pd.read_csv('pydata-book/ch06/ex5.csv', na_values=['NULL'])
result
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('pydata-book/ch06/ex5.csv', na_values=sentinels)
###Output
_____no_output_____
###Markdown
Reading Text Files in Pieces
###Code
result = pd.read_csv('pydata-book/ch06/ex6.csv')
result
pd.read_csv('pydata-book/ch06/ex6.csv', nrows=5)
chunker = pd.read_csv('pydata-book/ch06/ex6.csv', chunksize=1000)
chunker
from pandas import Series, DataFrame
tot = Series([])
for piece in chunker:
tot = tot.add(piece['key'].value_counts(), fill_value=0)
# tot = tot.sort_values(ascending=False)
# tot[:10]
###Output
_____no_output_____ |
class-solved/MODULE_1-python_introduction-solved/10.02_CondicionesBucles-solved.ipynb | ###Markdown
2- Conditions and LoopsIntroduction to Python Course - Tecnun, Universidad de Navarra In this document we will focus on creating conditions and loops. Unlike other programming languages, no braces or *end* statements are used to determine what is included inside the condition or the loop. In Python, all of this is done through indentation. Below we will see a few examples. Conditions The general syntax of conditions is the following:
###Code
a = 1
if a == 1:
print('La variable "a" vale 1.')
elif a == 2:
print('La variable "a" no vale 1, sino 2.')
else:
print('La variable "a" no vale 1 ni 2.')
###Output
La variable "a" vale 1.
###Markdown
The operators used for comparison are the following:- **==** and **!=** to check equality or inequality, respectively.- **\>** and **<** to check whether an element is strictly greater or strictly less than another, respectively.- **>=** and **<=** to check whether an element is greater than or equal to, or less than or equal to, another, respectively.If the condition is met, the check returns a boolean *True* and the lines corresponding to that condition are executed. If, on the contrary, the condition is not satisfied, we get a boolean *False* and the lines corresponding to the condition are not executed.If it is necessary to check whether several conditions are met at the same time, the boolean operators **and** and **or** can be used.
###Code
a = 2
b = 5
if a == 2 and b == 5:
print('Las variables "a" y "b" valen 2 y 5, respectivamente.\n')
else:
print('La variable "a" no vale 2 o la variable "b" no vale 5.\n')
if a == 2 or b == 5:
print('La variable "a" vale 2 o la variable "b" vale 5.')
else:
print('La variable "a" no vale 2 y la variable "b" no vale 5.')
###Output
Las variables "a" y "b" valen 2 y 5, respectivamente.
La variable "a" vale 2 o la variable "b" vale 5.
###Markdown
Apart from this type of check, you can test whether a list contains an element by using the **in** operator.
###Code
lista = ['a', 'b', 'd']
if 'b' in lista:
print('El elemento "b" está contenido en "lista".')
else:
print('El elemento "b" no está contenido en "lista".')
###Output
El elemento "b" está contenido en "lista".
###Markdown
If you want to negate conditions, you can use the boolean operator **not**.
###Code
a = 2
if not a == 2:
print('La variable "a" no vale 2.')
else:
print('La variable "a" vale 2.')
###Output
La variable "a" vale 2.
###Markdown
As long as it makes sense, these operators can be used with any type of variable.
###Code
a = "casa"
b = [1, 2, 3]
if a != 'coche':
print('La variable no contiene un coche.')
else:
print('La variable contiene un coche.')
if b == [1, 2, 3]:
print('La variable contiene la lista [1, 2, 3]')
else:
print('La variable no contiene la lista [1, 2, 3]')
###Output
La variable no contiene un coche.
La variable contiene la lista [1, 2, 3]
###Markdown
Range
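Besides a stop value and an optional start, `range` also accepts a third *step* argument; a quick sketch to complement the cells below:

```python
list(range(0, 10, 2))   # [0, 2, 4, 6, 8]
list(range(10, 0, -3))  # [10, 7, 4, 1]
```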
###Code
help(range)
range(10)
list(range(10))
list(range(2,10))
###Output
_____no_output_____
###Markdown
*for* loops The general syntax of *for* loops is the following:
###Code
for i in [1,2,3]:
print(i)
for i in range(0, 3):
print(i)
###Output
0
1
2
###Markdown
It is important to realize that the command *range(0, 3)* creates a **sequence of numbers between 0 and 2**. The *break* and *continue* statements can be very useful. The first one ends the loop at the moment it is executed, and the second one ends the current iteration of the loop and moves on to the next one.
###Code
for i in range(0, 10):
if i == 2:
continue
if i == 7:
break
print(i)
###Output
0
1
3
4
5
6
###Markdown
Just as with conditions, the variables we use as loop counters do not have to be numeric.
###Code
a = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
for i in a:
print(i)
###Output
a
b
c
d
e
f
g
###Markdown
*while* loops The general syntax of *while* loops is the following:
###Code
i = 1
while i <= 10:
print(i)
i += 1
###Output
1
2
3
4
5
6
7
8
9
10
###Markdown
The **+=** operator increases the value of the variable i by the value we write to its right on each iteration. Conversely, the **-=** operator decreases it.
###Code
i = 10
while i >= 0:
print(i)
i -= 2
###Output
10
8
6
4
2
0
###Markdown
As with *for* loops, the *break* and *continue* statements are also valid in these loops.
###Code
i = -1
while i <= 10:
i += 1
if i == 2:
continue
if i == 7:
break
print(i)
###Output
0
1
3
4
5
6
|
7_image_classification_using_keras.ipynb | ###Markdown
Image Classification with Keras *David B. Blumenthal*, *Suryadipto Sarkar* What is TensorFlow?TensorFlow is an open-source end-to-end platform that facilitates designing and deploying Machine Learning models using Python. What is Keras?Keras is an API built on top of TensorFlow that supports deep learning.
###Code
!unzip PET-IMAGES.zip
# IMPORT REQUIRED LIBRARIES:
# --------------------------
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
import pickle
import numpy as np
from numpy import genfromtxt
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import cv2
from PIL import Image
import imageio
import pandas as pd
import random
from sklearn.preprocessing import LabelBinarizer
from skimage.io import imread_collection
import glob
###Output
_____no_output_____
###Markdown
**Note:*** This is the way to mount drive and read images directly from Google Drive. However, since we have 24,000 images, this will take a while. Therefore, I will show you just this first step on Spyder as I can access the files locally.* Then, we can just upload the numpy arrays here and work on those. * Anyway, you only need to do this the very first time that you read in the data. * .py script can be accessed over this link: Classification DatasetWe will make use of the popular 'Cats vs Dogs' classification dataset.Dataset download link: https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip Exercise-1:* Read all of the .jpeg and .png images from a folder.* Save all of the images in a single list (i.e., a list of image arrays). Note that this is the most common way of working with data in python. We often save all of the data into numpy lists or dataframes or similar structures.There are many ways of doing this. I have made use of **opencv** as it is one of the most popular Computer Vision libraries in python. Read more at: https://github.com/opencv/opencv Solution for Exercise 1 Handling pickle (or npy) data:* We often need to run ML algorithms many times with little tweaks in the model.* How do we achieve this? Of course, we could read the data every time and save it as a list as shown above.* However, this is a very time-consuming approach.* What is a better approach? Save the data as a list of image arrays once, and reuse that saved copy later.Read about the **`.pickle`** and **`.npy`** file types. (Here, we will use pickle - don't forget to import the **pickle** library.) Writing data to pickle file:
###Code
# # SYNTAX:
# # -------
# pickle_out=open("data.pickle","wb")
# pickle.dump(data,pickle_out)
# pickle_out.close()
###Output
_____no_output_____
###Markdown
Reading data from pickle file:
###Code
# # SYNTAX:
# # -------
# data=pickle.load(open("data.pickle","rb"))
###Output
_____no_output_____
###Markdown
Exercise-2:* Read in the training data(X) and corresponding labels(y). Solution for Exercise 2 Randomizing the data: Exercise-3:* Randomize the data while preserving the sample-wise label information. Solution for Exercise 3 Exercise-4:* Save shuffled data as pickle file for later use. Solution for Exercise 4 **CONVOLUTIONAL NEURAL NETWORKS (CNNs):** Why CNNs?* Convert data to embeddings/ features* Example: Converting images from pixel space to feature space* Enhances learning, reduces dimensionality, represents the data better Components/ Layers: **Question**: Looking at the kernel matrix provided above, what kind of pattern do you think it is meant to detect? * Answer: Vertical edges I. Convolutional Layer:* Helps extract local patterns in the data (here, image).* Also helps reduce the number of features. But that is a byproduct of the convolution operation, it is not the main objective. The main objective is to extract meaningful local patterns.+ **Note: If the image is an RGB (3-channel image), the convolved image is also 3-channel. If the input image is a gray (1-channel) image, the convolved image is also single-channeled.** + This is because the kernel is applied on each channel separately for convolution. **An oversimplified example of convolution:**(Note: Kernel size 4*4, padding 0, stride length=1) II. Pooling Layer:* The main function of the pooling layer is to reduce dimensionality.* Two popular types of pooling: MaxPooling, and AveragePooling.* AveragePooling also helps in noise reduction. **A simple example of Pooling:**(Note: Pooling window size 3*3) III. Feedforward Layer:* Standard Neural Network architecture used for classification **A simple fully-connected, feed forward neural network:** Note on the 'Flatten' layer:* This is really not a layer in the conventional sense, although it is defined in the tensorflow.keras.layers.* This is a function used to convert the features (or weights) after pooling, to be fed into the aforementioned feedforward neural network for classification. * We will see this a little later when we design the model. **For interested readers:** Basic ideas behind machine learning and AI:* What do we mean by 'learning' and 'intelligent' systems?* What are the three main types of machine learning, and what are the differences? Artificial neural networks:* NN-related terms: Neurons, layers, activation functions, fully connected networks, multi-layer perceptron* Hyperparameter tuning and model improvement basics: Learning rate, no. of neurons oer layer, how to set model size (i.e., no. and type of layers)* Learning-related: Learning rate, backpropagation, gradient descent Optional reading (slightly more advanced):* What is transfer learning? What are pre-trained models, and how to use them? Why pre-trained models?* What is overfitting? How to 1. detect 2. tackle overfitting? * Regularization, Dropout, resampling, oversampling (read about 'SMOTE') and undersampling, data augmentation techniques. **Designing the model:** A schematic representation of our model:
###Code
X=pickle.load(open("X.pickle","rb"))
y=pickle.load(open("y.pickle","rb"))
# Normalize data
X=np.asarray(X)/255.0
# X=X.tolist()
y = np.array(y)
model=Sequential()
model.add( Conv2D(64,(3,3),input_shape=X.shape[1:]) )
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add( Conv2D(32,(2,2)) )
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('sigmoid'))
# model.add(Dense(64))
# model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy",
optimizer="adam",
metrics=['accuracy'])
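# (Added sketch) model.summary() prints each layer's output shape and parameter
# count, which is also a quick way to check the number of units/filters per layer
# asked about in Exercise-5 below.
model.summary()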
# callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
# history=model.fit(X, y, batch_size=32, shuffle=True, sample_weight=None, epochs=50,validation_split=0.1, verbose = 1, callbacks=[callback]) # seed=100,
history=model.fit(X, y, batch_size=32, shuffle=True, sample_weight=None, epochs=50,validation_split=0.1, verbose = 1) # seed=100,
# model.fit(X,y,batch_size=32,epochs=25,validation_split=0.1)
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Exercise-5:* How many neurons are there in each of the aforementioned layers? Solution for Exercise 5 Exercise-6:* Why have we not used a Softmax layer? Solution for Exercise 6 Solution for Exercise 1
###Code
# Simple way to read in all of the data:
# --------------------------------------
####################################################################################
CATS_folder='PET-IMAGES/Cat' # CATS_folder='Cat_folder_path' | # If reading from Drive: CATS_folder='/content/drive/MyDrive/PetImages/Cat'
DOGS_folder='PET-IMAGES/Dog' # DOGS_folder='Dog_folder_path' | # If reading from Drive: DOGS_folder='/content/drive/MyDrive/PetImages/Dog'
imdir = CATS_folder # or, DOGS_folder
ext = ['png', 'jpg'] # add other image formats for other datasets
files = []
[files.extend(glob.glob(imdir + '/*.' + e)) for e in ext]  # note the added '/': match files inside the folder, not siblings of it
images = [cv2.imread(file) for file in files]
####################################################################################
###Output
_____no_output_____
###Markdown
Back to Exercise 1 Solution for Exercise 2
###Code
X=pickle.load(open("X.pickle","rb"))
y=pickle.load(open("y.pickle","rb"))
###Output
_____no_output_____
###Markdown
Back to Exercise 2 Solution for Exercise 3
###Code
def Shuffle(X, y):
X_shuffled=[]
y_shuffled=[]
length=len(y)
index=list(range(length))
random.Random(12).shuffle(index)
for i in range(length):
X_shuffled.append(X[index[i]])
y_shuffled.append(y[index[i]])
return X_shuffled, y_shuffled
X, y=Shuffle(X, y)
###Output
_____no_output_____
###Markdown
Back to Exercise 3 Solution for Exercise 4
###Code
# Save the training data
pickle_out=open("X_save.pickle","wb")
pickle.dump(X,pickle_out)
pickle_out.close()
# Save the training labels
pickle_out=open("y_save.pickle","wb")
pickle.dump(y,pickle_out)
pickle_out.close()
###Output
_____no_output_____ |
tasks/solution_03_titanic.ipynb | ###Markdown
TitanicIn this assignment I will perform data analysis and some data preparation, namely: - data extraction: download the dataset and bring the data into a convenient tabular format (Pandas DataFrame) - do a first exploratory look at the data: inspect the descriptive statistics and the distribution of some features- cleaning: fill in some missing values- visual analysis: create several plots which (I hope) will help identify correlations and other information- after analyzing the data, perform feature engineering: remove unnecessary features, convert categorical features to numeric ones, combine some features into one, and so onOnce the data is ready, I will apply logistic regression to predict the categorical target variable, i.e. for classification. The goal of the classification is to determine whether a passenger of the Titanic survives.The dataset and additional information about it, including more tutorial examples, can be found on [Kaggle](https://www.kaggle.com/c/titanic) Import modulesLet's start by loading the modules required to begin our experiment. I will use numpy, pandas, seaborn and matplotlib.
###Code
# Imports
# Data analysis and math
import numpy as np
import pandas as pd
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style("whitegrid")
sns.set_context({"figure.figsize": (4, 4)})
# Preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
# Machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import learning_curve
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Data FetchingLet's load the Titanic dataset.This dataset can be found on [kaggle](https://www.kaggle.com/c/titanic/data) or on [GitHub](https://github.com/egoliuk/hlll_course/tree/master/tasks/data/titanic)
###Code
test_y = pd.read_csv('https://raw.githubusercontent.com/egoliuk/hlll_course/master/tasks/data/titanic/gender_submission.csv')
test_X = pd.read_csv('https://raw.githubusercontent.com/egoliuk/hlll_course/master/tasks/data/titanic/test.csv')
train = pd.read_csv('https://raw.githubusercontent.com/egoliuk/hlll_course/master/tasks/data/titanic/train.csv')
###Output
_____no_output_____
###Markdown
Let's merge the training and test samples into a single dataset.
###Code
test = pd.merge(test_X, test_y, on='PassengerId', how='inner')
test = test[train.columns]
dataset = pd.concat([train, test])
print('Довжина всього набору даних: {:.0f}'.format(dataset.shape[0]))
###Output
Довжина всього набору даних: 1309
###Markdown
Data ExplorationLet's take a look at the dataset and at the features that describe our observations.
###Code
dataset.sample(3)
###Output
_____no_output_____
###Markdown
We have 12 characteristics:- **PassengerId** - the identifier assigned to a traveller on the boat- **Survival** - the target feature, whether the passenger survived or not. 0 = No, 1 = Yes- **Pclass** - the socio-economic status of the passenger. 1 = Upper, 2 = Middle, 3 = Lower- **Name** - the passenger's name- **Sex** - the passenger's sex. male, female- **Age** - the passenger's age in years. The age is fractional if less than 1. If the age is estimated, it appears as xx.5- **SibSp** - the number of siblings or spouses travelling with the passenger. Sibling = brother, sister, stepbrother or stepsister; Spouse = husband, wife (mistresses and fiancés were ignored).- **Parch** - the number of parents or children travelling with the passenger. Parent = mother, father; Child = daughter, son, stepdaughter, stepson. Some children travelled only with a nanny, so for them Parch = 0.- **Ticket** - the passenger's ticket number- **Fare** - the passenger's ticket price- **Cabin** - the passenger's cabin number- **Embarked** - the port of embarkation. C = Cherbourg, Q = Queenstown, S = SouthamptonLet's look at the descriptive statistics of the quantitative features as well as the categorical features expressed as numbers:
###Code
dataset[['Survived', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare']].describe()
###Output
_____no_output_____
###Markdown
From the descriptive statistics we can see that the generalized portrait of a Titanic passenger is a 30-year-old person with no relatives on board, travelling in middle or lower class. Meanwhile, the chance of survival was 37.7%.
###Code
sns.countplot(x='Survived', data=dataset)
###Output
_____no_output_____
###Markdown
Most people did not survive.Let's look at the number of survivors by sex.
###Code
sns.set_context({"figure.figsize": (8, 4)})
sns.countplot(x='Survived', hue='Sex', data=dataset)
###Output
_____no_output_____
###Markdown
Here we can see that more men died than women, and that most women survived.Now let's compare the number of survivors by class.
###Code
sns.countplot(x='Survived',hue='Pclass', data=dataset)
###Output
_____no_output_____
###Markdown
Here we can see that first class had more survivors than casualties. For second class it is the other way around, and most of third class perished.Let's look at the distribution of the ticket fare.
###Code
# plt.hist(x='Fare', data=dataset, bins=40)
dataset['Fare'].hist(bins=40)
###Output
_____no_output_____
###Markdown
Here we can see that most people paid under 50, but there are some outliers like the passengers in the $500 range. The overall shape is explained by the difference in the number of people in each class: the lowest class, 3, has the most people and the highest class has the least, and the lowest class paid the lowest fares, so there are more people in that range. The under-$50 range covers almost all of the 2nd and 3rd classes and most of the 1st class as well. The passengers who paid $500, however, look like outliers even for 1st class, and it is not yet clear what caused such a high ticket price.
###Code
dataset[dataset['Pclass'] == 1]['Fare'].hist(bins=40)
###Output
_____no_output_____
###Markdown
Data Preprocessing Data Cleaning Missing DataFinally, let's look at the amount of missing data.
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1309 entries, 0 to 417
Data columns (total 12 columns):
PassengerId 1309 non-null int64
Survived 1309 non-null int64
Pclass 1309 non-null int64
Name 1309 non-null object
Sex 1309 non-null object
Age 1046 non-null float64
SibSp 1309 non-null int64
Parch 1309 non-null int64
Ticket 1309 non-null object
Fare 1308 non-null float64
Cabin 295 non-null object
Embarked 1307 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 172.9+ KB
###Markdown
And with the help of a heatmap.
###Code
fig, ax = plt.subplots(figsize=(12,8))
sns.heatmap(dataset.isnull(), cmap='coolwarm', yticklabels=False, cbar=False, ax=ax)
###Output
_____no_output_____
###Markdown
It looks like the Fare, Embarked, Age and Cabin features have missing values. We need to prepare the data for use by the model, so we have to clean the data of missing values. Let's see what we can do about it. Embarked nullsLet's start with the NaNs in the Embarked variable. We have two passengers without a port of embarkation. Both passengers survived and have the same ticket number. They also belonged to first class.
###Code
dataset[dataset['Embarked'].isnull()]
###Output
_____no_output_____
###Markdown
Let's try to figure out at which port these passengers embarked. First, let's look at the chances of survival depending on the port of embarkation.
###Code
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(15,5))
# Plot the number of occurrences for each embarked location
sns.countplot(x='Embarked', data=dataset, ax=ax1)
# Plot the number of people that survived by embarked location
sns.countplot(x='Survived', hue = 'Embarked', data=dataset, ax=ax2, order=[1,0])
# Group by Embarked, and get the mean for survived passengers for each
# embarked location
embark_pct = dataset[['Embarked','Survived']].groupby(['Embarked'],as_index=False).mean()
# Plot the above mean
sns.barplot(x='Embarked',y='Survived', data=embark_pct, order=['S','C','Q'], ax=ax3)
###Output
_____no_output_____
###Markdown
Here we can see that most passengers boarded the Titanic at port S, and because of that most of the passengers who survived were from port S. However, when we look at the average number of survivors relative to the total number of people who embarked at a given port, S had the lowest survival rate.This is not enough to conclude at which port the above-mentioned people, miss. Amelie and mrs. George Nelson, boarded. Let's look at other variables that may indicate where the passengers boarded the ship.Let's see whether anyone else has the same ticket number.
###Code
dataset[dataset['Ticket'] == '113572']
###Output
_____no_output_____
###Markdown
There are no other passengers with the same ticket number. Let's look for people of the same class who paid a similar fare.
###Code
dataset[(dataset['Pclass'] == 1) & (dataset['Fare'] > 75) & (dataset['Fare'] < 85)].groupby('Embarked')['PassengerId'].count()
###Output
_____no_output_____
###Markdown
Of the people with the same class who paid a similar fare, 25 people came from C and 18 people came from S.Now, given that most people of the same class with similarly priced tickets boarded at port C, and that people who embarked at C have the highest survival rate, we will assume that these passengers most likely boarded at port C. Let's change their NaN values to C.
###Code
# Set Value
dataset.at[dataset['Embarked'].isnull(), 'Embarked'] = 'C'
# Verify
dataset[dataset['Embarked'].isnull()]
###Output
_____no_output_____
###Markdown
Fare nullsNow let's deal with the missing values in the Fare column.
###Code
dataset[dataset['Fare'].isnull()]
###Output
_____no_output_____
###Markdown
Let's visualize a histogram of the fares paid by 3rd class passengers who embarked from Southampton.
###Code
fig,ax = plt.subplots(figsize=(8,5))
dataset[(dataset['Pclass'] == 3) & (dataset['Embarked'] == 'S')]['Fare'].hist(bins=100, ax=ax)
plt.xlabel('Fare')
plt.ylabel('Frequency')
plt.title('Histogram of Fare for Pclass = 3, Embarked = S')
plt.show()
print ("The top 5 most common fares:")
dataset[(dataset['Pclass'] == 3) & (dataset['Embarked'] == 'S')]['Fare'].value_counts().head()
###Output
The top 5 most common fares:
###Markdown
Let's fill the missing value with the most common fare - $8.05.
###Code
# Fill value
dataset.at[dataset['Fare'].isnull(), 'Fare'] = 8.05
# Verify
dataset[dataset['Fare'].isnull()]
###Output
_____no_output_____
###Markdown
Age nullsNow let's fill in the missing age data. One way is to fill the NaNs with the column mean. This approach can be improved, for example, by filling with the mean age for a given passenger class, since passengers have different age distributions depending on their class.
###Code
plt.figure(figsize=(12,7))
sns.boxplot(x='Pclass', y='Age', data=dataset)
facet = sns.FacetGrid(dataset, hue='Pclass', aspect=4)
facet.map(sns.kdeplot, 'Age', shade=True)
facet.set(xlim=(0, dataset['Age'].max()))
facet.add_legend()
dataset.groupby('Pclass')['Age'].mean()
###Output
_____no_output_____
###Markdown
We can see that the higher the class, the higher the mean age, which makes sense. So we can fill the NaN age values using the means above.
###Code
def fixNaNAge(age, pclass):
if age == age:  # NaN != NaN, so this is True only when the age is not missing
return age
if pclass == 1:
return 39
elif pclass == 2:
return 30
else:
return 25
# Fill value
dataset['Age'] = dataset.apply(lambda row: fixNaNAge(row['Age'], row['Pclass']), axis=1)
# Verify
dataset[dataset['Age'].isnull()]
facet = sns.FacetGrid(dataset, hue='Pclass', aspect=4)
facet.map(sns.kdeplot, 'Age', shade=True)
facet.set(xlim=(0, dataset['Age'].max()))
facet.add_legend()
facet = sns.FacetGrid(dataset, hue='Survived', aspect=4)
facet.map(sns.kdeplot, 'Age', shade=True)
facet.set(xlim=(0, dataset['Age'].max()))
facet.add_legend()
fig, ax = plt.subplots(1,1,figsize=(18,4))
age_mean = dataset[['Age','Survived']].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=age_mean)
###Output
_____no_output_____
###Markdown
Cabin nullsFinally, for the Cabin column we are missing too much information to fill it in properly.
###Code
print(f"Значення Cabin пропущено для {dataset[dataset['Cabin'].isnull()].shape[0]} пасажирів")
###Output
Значення Cabin пропущено для 1014 пасажирів
###Markdown
In that case we can drop this column entirely:
###Code
dataset.drop('Cabin', axis=1, inplace=True)
dataset.sample(3)
###Output
_____no_output_____
###Markdown
So we have cleaned/filled all the missing data.
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1309 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 1309 non-null int64
Survived 1309 non-null int64
Pclass 1309 non-null int64
Name 1309 non-null object
Sex 1309 non-null object
Age 1309 non-null float64
SibSp 1309 non-null int64
Parch 1309 non-null int64
Ticket 1309 non-null object
Fare 1309 non-null float64
Embarked 1309 non-null object
dtypes: float64(2), int64(5), object(4)
memory usage: 162.7+ KB
###Markdown
Data RelationshipsNow that we have done a first analysis and cleaned our data of missing values, let's look at the relationships between the different columns.To explore the relationships between different features we can use scatter plots and correlation heatmaps between different attributes. We will look at the correlation heatmap of the different features, excluding the target variable Survived. We can only build a correlation map for the numeric attributes.
###Code
corrmat = dataset.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10})
plt.title('Pearson Correlation of Features', y=1.05, size=15)
plt.show()
###Output
_____no_output_____
###Markdown
We can see that:- there is a noticeable negative correlation between the target feature and the passenger class Pclass- there is some correlation between the target feature and the ticket fare Fare- Pclass and Fare have a noticeable negative correlation, which is explained by first-class tickets being more expensive than middle- and lower-class tickets- Pclass and Age have a noticeable negative correlation; it seems that older, wealthier people can more often afford to travel first class- SibSp and Parch have some correlation; it seems that some passengers travelled as whole families- PassengerId does not correlate with the other features at all, so we will be able to drop this featureLet's build a scatter plot for our features:
###Code
#scatterplot
sns.set()
sns_plot = sns.pairplot(dataset, height = 2)
plt.show()
###Output
_____no_output_____
###Markdown
Some trends are visible. For example, lower-class passengers more often had a large number of their siblings, children, parents and spouses on board the Titanic. At the same time, older passengers travelled less often with siblings or spouses. But I do not see any interesting trend that would indicate an obvious relationship with the target variable. For categorical attributes we can use what is called a cross-tabulation.
###Code
pd.crosstab(dataset['Pclass'], dataset['Sex'], margins=True)
pd.crosstab(dataset['Pclass'], dataset['Embarked'], margins=True)
pd.crosstab(dataset['Survived'], dataset['Embarked'], margins=True)
pd.crosstab(dataset['Embarked'], dataset['Sex'], margins=True)
pd.crosstab(dataset['Survived'], dataset['Pclass'], margins=True)
pd.crosstab(dataset['Survived'], dataset['Sex'], margins=True)
###Output
_____no_output_____
###Markdown
So we can see that:
- In every class there were more men than women; in the lowest class there were more than twice as many men.
- Most passengers boarded at port S: the majority of 2nd-class passengers, more than half of 1st-class passengers and 2/3 of 3rd-class passengers. If we compare this with the relationship between port and the number of casualties, we see that 2/3 of the passengers from port S died, while passengers from the other ports, C and Q, had roughly equal chances of surviving or dying. It is doubtful that there is a causal link between the port itself and the chance of survival; more likely the high share of casualties from port S is explained by the high share of lower- and middle-class passengers, or perhaps by the fact that most passengers from this port were men. We also see that almost all passengers from port Q were lower class, yet the ratio of survivors to casualties from this port is almost even. If we look at the cross-tabulation of sex and port, the ratio of men to women from port Q is also almost even.
- The ratio of casualties to survivors across sex and passenger class suggests a possible dependence.

Comparing data presented in tables is not always convenient, so let's visualize it:
###Code
sns.countplot(x="Survived", hue="Sex", data=dataset)
sns.countplot(x="Survived", hue="Pclass", data=dataset)
###Output
_____no_output_____
###Markdown
So, we have analysed the dataset, seen certain relationships between the attributes, and it is also clear that some features do not affect the target variable. Time for feature engineering. Feature Engineering Adding featuresPassenger names contain prefixes and titles (such as Mr, Miss, Dona, Master, etc.) which in some cases indicate a person's social status, which may have been an important survival factor during the disaster. For example, the name *Braund, Mr. Owen Harris Heikkinen* contains the prefix *Mr.* Let's create an additional Title column where we will store these passenger titles.
###Code
Title_Dictionary = {
"Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Jonkheer": "Nobel",
"Don": "Nobel",
"Sir" : "Nobel",
"Dr": "Officer",
"Rev": "Officer",
"the Countess": "Nobel",
"Dona": "Nobel",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master",
"Lady" : "Nobel"
}
dataset['Title'] = dataset['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
dataset.sample(3)
dataset[dataset['Title'].isnull()]
###Output
_____no_output_____
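###Markdown
As a side note, the same raw titles could be pulled out with a vectorized regular expression instead of the split-based `apply` above. This is only a sketch, assuming the usual "Surname, Title. Given names" format of the Name column:
###Code
# Sketch: capture the text between the comma and the first dot, then map it through
# the same Title_Dictionary as above (the assignment line is commented out so nothing is overwritten).
raw_title = dataset['Name'].str.extract(r',\s*([^.]+)\.', expand=False).str.strip()
print(raw_title.value_counts())
# dataset['Title'] = raw_title.map(Title_Dictionary)
###Output
_____no_output_____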
###Markdown
Let's check whether there is a relationship between the title and the chance of survival:
###Code
sns.countplot(x="Survived", hue="Title", data=dataset)
###Output
_____no_output_____
###Markdown
As we can see, passengers titled Miss and Mrs had better chances of surviving, while Mr and Officer had low chances of survival. Aggregating FeaturesLet's add a FamilySize field that aggregates the information in the SibSp (siblings/spouses aboard) and Parch (parents/children aboard) fields about relatives on board the Titanic.
###Code
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch']
dataset.sample(3)
###Output
_____no_output_____
###Markdown
As we already saw in the visual analysis above, a passenger's sex was an important survival factor in the Titanic disaster, as was the passenger's age. This can be explained, for example, by preferential treatment of women and children. Let's add a new feature that accounts for both the sex and the age of the passengers.
###Code
def getPerson(passenger):
age, sex = passenger
return 'child' if age < 16 else sex
dataset['Person'] = dataset[['Age', 'Sex']].apply(getPerson, axis=1)
dataset.sample(10)
###Output
_____no_output_____
###Markdown
Let's see what chances of survival children had:
###Code
sns.countplot(x="Pclass", hue="Person", data=dataset)
sns.countplot(x="Survived", hue="Person", data=dataset)
###Output
_____no_output_____
###Markdown
Even though most children were from the middle and lower classes, whose passengers did not have high chances of surviving, children's chances of surviving were slightly higher than of dying. Dropping Useless FeaturesNow let's get rid of the features that have been merged into another feature or have no noticeable effect on the target feature. The features we will drop are Name, Sex, Ticket, SibSp and Parch. I will drop the PassengerId feature later, right before training, after splitting the data back into training and test sets by PassengerId, using the same split they had originally, before I combined the initial test and training sets into one dataset.
###Code
features_to_drop = ['Name', 'Sex', 'Ticket', 'SibSp', 'Parch']
dataset.drop(labels=features_to_drop, axis=1, inplace=True)
dataset.sample(3)
# corrmat = dataset.corr()
# f, ax = plt.subplots(figsize=(12, 9))
# sns.heatmap(corrmat, vmax=.8, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10})
# plt.title('Pearson Correlation of Features', y=1.05, size=15)
# plt.show()
#scatterplot
# sns.set()
# sns_plot = sns.pairplot(dataset, height = 2)
# plt.show()
###Output
_____no_output_____
###Markdown
Convert Categorical VariablesCategorical data needs to be converted into numeric values, since scikit-learn only accepts numeric values as input. We could represent categorical values with numbers, but such an encoding implies an ordered relationship between the values of the category, as in some rating scale. Pclass can be considered such an ordered categorical feature. If our categorical data has no order, we can encode the categorical values by replacing the categorical variable with several [dummy variables](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)). To convert a categorical variable into dummies we will use [one-hot-encoding](https://en.wikipedia.org/wiki/One-hot). Depending on whether the categories are mutually exclusive or one observation can belong to several categories at once, the number of new dummy variables will be one less than the number of categories, or equal to the number of categories, respectively. So, we have four categorical features: Pclass, Embarked, Title and Person. We can convert them with one-hot encoding, so that each category of each feature becomes a new column. The category column gets the value 1 if the original feature belonged to that category; the remaining columns get the value 0.
###Code
# Create dummy features for each categorical feature
dummies_person = pd.get_dummies(dataset['Person'], prefix='Person')
dummies_embarked = pd.get_dummies(dataset['Embarked'], prefix='Embarked')
dummies_title = pd.get_dummies(dataset['Title'], prefix='Title')
# Add the new features to the dataframe via concating
temp_dataset = pd.concat([dataset, dummies_person, dummies_embarked, dummies_title], axis=1)
# Drop the original categorical feature columns
temp_dataset = temp_dataset.drop(['Person', 'Embarked', 'Title'], axis=1)
# Drop one of each of the dummy variables because its value is implied
# by the other dummy variable columns
# E.g. if Person_male = 0, and Person_female = 0, then the person
# is a child
dataset = temp_dataset.drop(['Person_child', 'Embarked_C', 'Title_Master'], axis=1)
dataset.head()
###Output
_____no_output_____
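###Markdown
Note that pandas can also drop one redundant dummy level per category for you via `drop_first=True`; the dropped level is chosen alphabetically, so the kept columns may differ from the hand-picked ones above. A minimal sketch on a toy frame (not the Titanic data):
###Code
# Sketch: one-hot encode and drop one level per category in a single call.
toy = pd.DataFrame({'Person': ['child', 'male', 'female', 'male'],
                    'Embarked': ['C', 'S', 'Q', 'S']})
print(pd.get_dummies(toy, columns=['Person', 'Embarked'], drop_first=True))
###Output
_____no_output_____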
###Markdown
Now our data is ready for training and testing a model. Building a Logistic Regression ModelSo, we have analysed our dataset, cleaned the missing values, and transformed the features to extract useful information from them. All of this was the preparation needed to train, on our data, a model capable of determining which Titanic passengers had a better chance of surviving. Split the data and the labelsFirst, let's split the dataset into training and test sets:
###Code
ds_train = dataset.loc[dataset['PassengerId'].isin(train['PassengerId'].values)]
ds_train = ds_train.drop(['PassengerId'], axis=1)
print('Training set length: {:.0f}'.format(ds_train.shape[0]))
ds_test = dataset.loc[dataset['PassengerId'].isin(test_X['PassengerId'].values)]
ds_test = ds_test.drop(['PassengerId'], axis=1)
print('Test set length: {:.0f}'.format(ds_test.shape[0]))
###Output
Training set length: 891
Test set length: 418
###Markdown
Now let's split the training and test sets into the target feature and the descriptive features:
###Code
X_train = ds_train.drop(['Survived'], axis=1)
y_train = ds_train['Survived']
X_test = ds_test.drop(['Survived'], axis=1)
y_test = ds_test['Survived']
###Output
_____no_output_____
###Markdown
Rescaling valuesHaving features on different scales (min and max values) can cause problems for some machine learning models, since many models are based on the concept of Euclidean distance. This means that features with larger scales would have a greater influence on the decision than those with smaller values. We can fix this by rescaling the independent variables, which can be done with a scaling function.
###Code
scaler = StandardScaler()
# scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
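###Markdown
For intuition, StandardScaler standardizes each column with the mean and standard deviation learned from the training data only. A minimal sketch of the equivalent NumPy computation on toy numbers (not the Titanic features):
###Code
# Sketch: what fit_transform / transform compute under the hood.
toy_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
toy_test = np.array([[2.0, 25.0]])
mu = toy_train.mean(axis=0)
sigma = toy_train.std(axis=0)        # population std, which is what StandardScaler uses
print((toy_train - mu) / sigma)      # should match scaler.fit_transform(toy_train)
print((toy_test - mu) / sigma)       # should match scaler.transform(toy_test)
###Output
_____no_output_____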
###Markdown
Create and fit a modelNow we can create and fit a model on the training dataset.
###Code
model = LogisticRegression()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Validate modelGet the model's accuracy on the test set:
###Code
predictions = model.predict(X_test)
cm = confusion_matrix(y_test, predictions)  # compute the confusion matrix once
TN, FP, FN, TP = cm[0][0], cm[0][1], cm[1][0], cm[1][1]
total = TN + FP + FN + TP
ACC = (TP + TN) / float(total)
print ("Ця модель має точність {}% на тестовому наборі даних".format(round(ACC * 100, 2)))
print ("TN {}. Це {}% від загальної кількості прогнозів".format(TN, round((TN) / float(total) * 100, 2)))
print ("TP {}. Це {}% від загальної кількості прогнозів".format(TP, round((TP) / float(total) * 100, 2)))
print ("FN {}. Це {}% від загальної кількості прогнозів".format(FN, round((FN) / float(total) * 100, 2)))
print ("FP {}. Це {}% від загальної кількості прогнозів".format(FP, round((FP) / float(total) * 100, 2)))
###Output
This model has an accuracy of 92.11% on the test set
TN 242. This is 57.89% of all predictions
TP 143. This is 34.21% of all predictions
FN 9. This is 2.15% of all predictions
FP 24. This is 5.74% of all predictions
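###Markdown
The same accuracy, plus a per-class breakdown, can also be read directly from scikit-learn's metrics helpers; a minimal sketch, assuming `y_test` and `predictions` as above:
###Code
# Sketch: accuracy and per-class precision/recall without unpacking the confusion matrix by hand.
from sklearn.metrics import accuracy_score, classification_report
print("Accuracy: {:.2f}%".format(accuracy_score(y_test, predictions) * 100))
print(classification_report(y_test, predictions))
###Output
_____no_output_____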
###Markdown
Evaluation: Cross ValidationThe train/test approach above does not really show how well the model performs. When building a model, we want it to generalize (low bias) and to reach similar accuracies across test sets (low variance). However, we are short on training and testing data. A better way to validate the model is to use cross-validation. In cross-validation we split the data into different training and test sets and use them to train and test the model several times (several train/test iterations), changing the training and test sets on every iteration. Options include Leave One Out Cross Validation, KFold Cross Validation, and so on. KFold Cross Validation is a common method in which the training set is divided into k equal folds. Of these k folds, one fold is used for testing and the remaining k-1 folds are used for training. This process is repeated k times, each time using a different fold for testing, so every sample is tested exactly once. At the end we obtain k accuracies for the model, from which we can compute the mean accuracy and the standard deviation of the accuracy. The higher the mean accuracy, the lower the bias; the lower the standard deviation, the lower the variance. This reflects the true performance of the model on the training set better.
###Code
y = dataset['Survived']
X = dataset.drop(['Survived'], axis=1)
X = X.drop(['PassengerId'], axis=1)
model = LogisticRegression()
scaler = StandardScaler()
kfold = KFold(n_splits=10)
kfold.get_n_splits(X)
accuracy = np.zeros(10)
np_idx = 0
for train_idx, test_idx in kfold.split(X):
X_train, X_test = X.values[train_idx], X.values[test_idx]
y_train, y_test = y.values[train_idx], y.values[test_idx]
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
TN = confusion_matrix(y_test, predictions)[0][0]
FP = confusion_matrix(y_test, predictions)[0][1]
FN = confusion_matrix(y_test, predictions)[1][0]
TP = confusion_matrix(y_test, predictions)[1][1]
total = TN + FP + FN + TP
ACC = (TP + TN) / float(total)
accuracy[np_idx] = ACC*100
np_idx += 1
print ("Fold {}: Accuracy: {}%".format(np_idx, round(ACC,3)))
print ("Average Score: {}%({}%)".format(round(np.mean(accuracy),3),round(np.std(accuracy),3)))
###Output
Fold 1: Accuracy: 0.802%
Fold 2: Accuracy: 0.817%
Fold 3: Accuracy: 0.847%
Fold 4: Accuracy: 0.771%
Fold 5: Accuracy: 0.779%
Fold 6: Accuracy: 0.817%
Fold 7: Accuracy: 0.847%
Fold 8: Accuracy: 0.962%
Fold 9: Accuracy: 0.947%
Fold 10: Accuracy: 0.985%
Average Score: 85.724%(7.439%)
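###Markdown
For reference, the manual fold loop above can be compressed with scikit-learn's cross-validation helpers. A minimal sketch, assuming `X` and `y` as defined above and the estimator imports used earlier in the notebook; a Pipeline is used so the scaler is re-fit on each training fold, as in the loop:
###Code
# Sketch: 10-fold accuracy with a scaler + logistic regression pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=KFold(n_splits=10), scoring='accuracy')
print("Average Score: {:.3f}%({:.3f}%)".format(scores.mean() * 100, scores.std() * 100))
###Output
_____no_output_____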
###Markdown
Let's create a function that visualizes the accuracy of the models we build. It draws a continuous line of the mean scores of the chosen estimator for the two datasets, and a coloured band around the mean line, i.e. the interval (mean - standard deviation, mean + standard deviation). `plot_learning_curve()` uses the `sklearn.model_selection.learning_curve()` function, which computes cross-validated training and test scores for different training set sizes. The scores are averaged over all k cross-validation iterations for each training set size.
###Code
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1,\
train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\
train_scores_mean + train_scores_std, alpha=0.1, \
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.legend(loc="best")
return plt
###Output
_____no_output_____
###Markdown
Let's plot the classifier's "learning curve" on both the training and the test data.
###Code
X_scaled = scaler.fit_transform(X)
plot_learning_curve(model,'Logistic Regression', X_scaled, y, cv=10)
###Output
_____no_output_____
###Markdown
Optimize Model: Grid SearchGrid searching is a well-known method for picking the hyperparameters that optimize your model. Grid search simply builds several models with all the specified parameter combinations and runs cross-validation to return the parameter set that achieved the highest CV score on the held-out folds.
###Code
model = LogisticRegression()
scaler = StandardScaler()
kfold = KFold(n_splits=10)
kfold.get_n_splits(X)
best_model = model
best_params = {}
best_accuracy = 0
best_std = 0
for C in [0.001,0.01,0.05,0.1,0.5,1,5,10, 100]:
for solver in ['newton-cg','lbfgs','liblinear','sag']:
model = LogisticRegression(C=C, solver=solver)
accuracy = np.zeros(10)
np_idx = 0
for train_idx, test_idx in kfold.split(X):
X_train, X_test = X.values[train_idx], X.values[test_idx]
y_train, y_test = y.values[train_idx], y.values[test_idx]
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
TN = confusion_matrix(y_test, predictions)[0][0]
FP = confusion_matrix(y_test, predictions)[0][1]
FN = confusion_matrix(y_test, predictions)[1][0]
TP = confusion_matrix(y_test, predictions)[1][1]
total = TN + FP + FN + TP
ACC = (TP + TN) / float(total)
accuracy[np_idx] = ACC*100
np_idx += 1
if np.mean(accuracy) > best_accuracy:
best_model = model
best_params = {'C':C, 'solver':solver}
best_accuracy = np.mean(accuracy)
best_std = np.std(accuracy)
print (best_params)
print ("Найкращій бал: {}%({}%)".format(round(best_accuracy, 2),round(best_std, 2)))
print ("\nОптимальна модель логістичної регресії використовує C={}, та {} solver, та має бал під час перехресної перевірки на тестовій виборці {}% зі стандартним відхиленням {}%".format(best_params['C'],best_params['solver'],round(best_accuracy, 2),round(best_std, 2)))
###Output
{'C': 0.05, 'solver': 'newton-cg'}
Best score: 86.49%(7.88%)
The optimal logistic regression model uses C=0.05 and the newton-cg solver, and achieves a cross-validation score of 86.49% with a standard deviation of 7.88%
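###Markdown
The same search can be expressed with scikit-learn's GridSearchCV; a minimal sketch, assuming `X` and `y` as above (scores may differ slightly from the manual loop, since scaling here is done inside each fold through a Pipeline):
###Code
# Sketch: grid search over the same C/solver grid, with the scaler fitted per training fold.
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
pipe = Pipeline([('scaler', StandardScaler()), ('clf', LogisticRegression())])
param_grid = {'clf__C': [0.001, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 100],
              'clf__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag']}
search = GridSearchCV(pipe, param_grid, cv=KFold(n_splits=10), scoring='accuracy')
search.fit(X, y)
print(search.best_params_, round(search.best_score_ * 100, 2))
###Output
_____no_output_____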
|
Assignments/A1-SamplingTimeSeries.ipynb | ###Markdown
- Name- PID- COGS118C - Assignment 1 This notebook has [30 + 3 bonus] points in total The number of points for each question is denoted by []. Make sure you've answered all the questions and that the point total add up. --- Lab 1 - Time Series, Sampling, and Epoched Analysis (ERPs)In this lab, we will cover the first stages of signal processing: sampling data. This includes digitization and sampling theorem. We will generate and plot some signals. Then, we'll perform our first kind of neural signal analysis: event-related potentials.Key concepts:- visualizing time-series- digitization/quantization- sampling- (more) indexing arrays- epoching- event-related potentials (ERPs): noise and averaging**Answers for questions requiring written responses can be entered in the cell immediately below the question, so that when you write your response, it doesn't screw up the formatting of the question.** Analog signalsReal world signals are continuous in time and amplitude (up to quantum-level limits, anyway). These are referred to as **"analog"** signals (Google it). Soundwaves that we produce when we speak or when we play a violin, for example, are analog signals. Equivalently, there are "analog devices" that produce, receive, and/or operate on analog signals. These often involve "analog" circuits. [1] Q1:[1] 1.1: Give 3 examples of analog devices. **Response for 1.1:** joystick, clock, keyboard Digital signalsPeople used to analyze signals using analog circuits. This is pretty hardcore, and requires extensive hands-on knowledge about circuitry. Once you want to analyze the signal on a "digital" computer, however, you have to "digitize" the signal. This requires an **"analog-to-digital converter"** or ADC for short. ---A tangent (without delving too much into how a computer works): all modern computers operate with binary transistors, which use a combination of "bits" to represent all other types of information. In the analog world, there are an infinite number of number between 0 and 1, so there is a limit to how accurately we can represent small decimals (or really big numbers). Python uses [floating point](https://0.30000000000000004.com/). Everything you see on your screen, at the lowest level, is converted into a numerical **binary** representation, even strings (see [ASCII](https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html) table, for example).---Anyway, to digitize an analog signal, you have to discretely sample, both in value (voltage, brightness, etc) and in time. The former is usually called **digitization or quantization**, while **sampling** usually refers to the latter. It's like drawing a grid over your continuous signals and interpolating its values only at where the grid crosses. Let's get into itWithout further ado: let's load up some EEG signals and explore. But first, make the necessary python module imports.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import io # this submodule lets us load the signal we want
%matplotlib inline
# scipy loads .mat file into a dictionary
# the details are not crucial, we just have to unpack them into python variables
EEG_data = io.loadmat('data/EEG_exp.mat', squeeze_me = True)
# print all the variables that exist in the dictionary
print(EEG_data.keys())
# this contains the EEG data
EEG = EEG_data['EEG']
# this contains the sampling rate, in Hz (or samples/second)
fs = EEG_data['fs']
# let's plot the signal
plt.figure(figsize=(15,3))
plt.plot(EEG)
# ALWAYS label your plot axes in this course (and ever)
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
# now let's zoom in to see more detail
plt.figure(figsize=(15,3))
plt.plot(EEG, '.') # '.' means plot the data points as individual dots without linking them
plt.xlim([0,1000]) # this limits the x-axis shown
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
dict_keys(['__header__', '__version__', '__globals__', 'EEG', 'fs', 'trial_info'])
###Markdown
[3] Q2: DigitizationAs you can see above, the signal we loaded is already a digitally sampled time series (a little over 70,000 samples), represented by discrete points in the second plot. To study the effect of quantization, let's simulate what would happen if we further quantized the signal, with a (prehistoric) 4-bit ADC.[1] 2.1: How many possible values can a 4-bit ADC represent? Remember, this means that the ADC has 4 binary 'bits' that it can use, thus giving you a total of how many levels? Compute this number in code and store that value in the variable `num_levels` below.[1] 2.2: Let's say our ADC has a total range between -32uV to 32uV. What is the voltage resolution of our ADC then? In other words, what is the finest voltage difference our ADC can distinguish between two samples? Compute this number in code and store that value in the variable `delta_v` below.[1] 2.3: Run the next two cells, they should produce a graph where the orange trace looks very quantized (kind of square). This is not good, because then we cannot distinguish small fluctuations in our signals, which, as we will see later in the course, are very important. **Re-run** the next two cells, but experiment with different values for `num_bits`. Just based on visual inspection of the plot, what is the minimum number of bits that you would want your ADC to have in this case, assuming the blue trace is a faithful representation of your signal? There's no one right answer, but justify your response. **Response for 2.3:** 16 bits or as many bits as possible. The more bits, the smaller the voltage difference that can be distinguished. As the number of bits increased, the difference between ground truth and quantized signal approaches zero.
###Code
num_bits = 16
min_v, max_v = -32,32
num_levels = 2**num_bits
delta_v = (max_v-min_v)/num_levels
# create the quantization vector, these are the new possible values that your signal can take
ADC_levels = np.arange(min_v,max_v,delta_v)+delta_v/2
# quantize the EEG signal with our crappy ADC with the function np.digitize
# note that we have to scale the redigitized signal to its original units
EEG_quant = np.digitize(EEG,bins=ADC_levels)*delta_v+min_v
plt.figure(figsize=(15,4))
plt.plot(EEG, label='Original EEG')
plt.plot(EEG_quant, label='Quantized EEG', alpha=0.8)
plt.xlim([0,1000]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
_____no_output_____
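###Markdown
One hedged way to put a number on the answer to 2.3 is to repeat the quantization at several bit depths and compare the RMS error against the original trace; a sketch reusing `EEG`, `min_v` and `max_v` from above (the "right" bit depth is still a judgment call):
###Code
# Sketch: RMS quantization error as a function of ADC bit depth.
for nb in [4, 6, 8, 10, 12, 16]:
    dv = (max_v - min_v) / 2**nb                      # voltage resolution at this bit depth
    levels = np.arange(min_v, max_v, dv) + dv / 2     # quantization levels, as above
    eeg_q = np.digitize(EEG, bins=levels) * dv + min_v
    rms_err = np.sqrt(np.mean((EEG - eeg_q)**2))
    print('{:2d} bits -> RMS error {:.4f} uV'.format(nb, rms_err))
###Output
_____no_output_____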
###Markdown
--- Sample Number vs. TimeNotice that in all the plots above, the x-axis is "sample number", which simply correponds to the position each value is in the array `EEG`. We want to create a corresponding time vector, which marks at what clock time each value is sampled at. Sometimes your data will include a time vector. But for the sake of this exercise, you are asked to create the time vector based on the information/variables you have. [6] Q3: Sampling in Time[1] 3.1: Given the sampling rate, what is the sampling **period**? In other words, how much time elapses between each consecutive sample? Compute this number as a function of `fs` and store it in the variable `dt` below.[1] 3.2: How long in total is this signal, in absolute time? Compute and store this in the variable `T_exp` below.[1] 3.3: Construct the corresponding time vector for the EEG data, assuming that the first sample came at t=0 and evenly spaced samples at `dt`. Store that in the variable `t_EEG` below. Hint: check out the function `np.arange()`.[2] 3.4: Re-plot the signal as a line chart, but with the x-axis as time (using the time vector you created above), and zoom into the first 1 second of the data. **Take note to label your plots carefully, with units!**[1] 3.5: To simulate **downsampling** in time, plot every **10th** value of the EEG data by indexing the array (check Google/StackExchange for how to do this). Remember, this applies both to the time vector and your EEG data. **Make sure to label your data and display the legend as Q2 above.**[BONUS: 1] 3.6: Sometimes it's useful to downsample your signal in time to conserve memory. As we did above, by taking every 10th value in our data, we essentially reduce the data size 10-fold. However, this is **NOT** the entirely right way to downsample your data. What issue do we introduce when we simply do that? (Hint: the answer can be as short as one word, and Google is your friend here.) **Response for 3.6:** Aliasing
###Code
dt = 1/fs
T_exp = len(EEG)*dt
t_EEG = np.arange(0, T_exp, dt)
# Plotting the signal and its downsampled version
plt.figure(figsize=(15,3))
plt.plot(t_EEG, EEG, label='EEG')
plt.plot(t_EEG[::10], EEG[::10], '.-', label='Downsampled')
plt.xlim([0,1]); plt.ylim([-15, 15]);
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
###Output
_____no_output_____
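###Markdown
On response 3.6: taking every 10th sample skips the anti-aliasing low-pass filter that a proper downsampling routine applies first. A hedged sketch for comparison only (not required by the assignment), using scipy.signal.decimate:
###Code
# Sketch: decimate low-pass filters the signal before keeping every 10th sample,
# so high-frequency content is not aliased into the downsampled trace.
from scipy import signal
EEG_dec = signal.decimate(EEG.astype(float), 10)
t_dec = t_EEG[::10]
n = min(len(EEG_dec), len(t_dec))    # guard against a possible off-by-one from np.arange
plt.figure(figsize=(15,3))
plt.plot(t_EEG, EEG, label='Original EEG')
plt.plot(t_dec[:n], EEG_dec[:n], '.-', label='Decimated (anti-aliased)')
plt.xlim([0,1]); plt.ylim([-15, 15]);
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
###Output
_____no_output_____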
###Markdown
Event-Related AnalysisThe above data actually comes from an event-style EEG experiment. The participant is shown visual stimuli at regular intervals, aimed to trigger a reliable brain response for each type of stimuli (cat vs. dog pics, for example). This is a very common type of study design in neuroscience (and psychology). In this case, we will need to know when a stimulus was presented, and what type of stimulus it was. This information is stored in the variable `trial_info`, where the **first column has the stimulus onset time (in seconds), and the second column has the type of stimulus shown (1,2, or 3).** These are often extra streams of data sent through the "trigger channel" by the stimulus-presenting computer directly to the recording equipment, in order to synchronize with the EEG data.
###Code
trial_info = EEG_data['trial_info']
# print the first 10 events
print(trial_info[:10,:])
###Output
[[ 1. 3. ]
[ 3.375 3. ]
[ 5.87 1. ]
[ 8.183 2. ]
[10.419 1. ]
[12.588 1. ]
[14.87 2. ]
[17.086 2. ]
[19.164 3. ]
[21.237 2. ]]
###Markdown
--- Process for Analyzing Event-Related DataThese types of experiments follow a pretty standard analysis process. 1. Import and pre-process your data (already done; we'll skip the pre-processing for now)2. Given the stimulus presentation timestamps (first column of `trial_info` above), find the corresponding indices in your EEG data by matching to the `t_EEG` time vector.3. Cut out an **epoch** (window of data) around the stimulus presentation time, which usually includes: - pre-stimulus baseline (~0.5 seconds before stimulus presentation) - stimulus presentation (t = 0) - stimulus-driven response (or event-related response, 0-1 second after stimulus presentation)4. Baseline subtraction: subtract each epoch by its mean pre-stimulus value to account for any slow drifts over time.5. Group epochs based on stimulus type, and average epochs of the same type.6. Plot the average response (s). [4] Q4: Step 2 - Find Matching Timestamps in EEG DataGiven the event times in `trial_info`, which we will assume to be the stimulus onset time for this experiment, we have to find the corresponding timestamp in the EEG data. Note that the timestamps may not always match exactly, as they could have different sampling rates. In those cases, you will have to settle for finding the **closest** timestamps. Currently, however, life was made easy for us by virtue of the fact that the EEG data (and timestamps) and the stimulus event timestamps are synchronously sampled at 1000Hz.In this case, we can directly convert the event timestamp into an integer index, since we know the sampling frequency and starting time. [1] 4.1: If the EEG timestamp starts at `t=0`, which is indexed by `i=0`, and is sampled at `fs=1000`, at which index will the EEG timestamp be equal to **3.050 seconds**? Compute and store this in the variable `trial_index` below. Note that to index an array, the number has to be an integer, which I've converted for you. (You will notice that the value is *a LITTLE* off. That's a precision issue and We can ignore that for now.)[3] 4.2: Following this logic, write a function that will find the corresponding index in the EEG data/timestamp for every event timestamp. Return that as an array of integers (`my_arr.astype(int)` will convert an array to all integers). You may use a for loop, list comprehension, or a simple (one-line) array calculation for this. Confirm that the timestamps match what you expect by printing the first 10 events (I've done this for you).
###Code
trial_index = (3.050*fs)
print(t_EEG[np.array(trial_index).astype(int)]) # access the value at the corresponding index
def compute_EEG_indices(event_timestamps, fs):
return np.multiply(event_timestamps, fs).astype(int)
# call your function to compute the corresponding indices
EEG_indices = compute_EEG_indices(trial_info[:, 0], fs)
# print your solution and the actual event times to compare, they should be identical
print(t_EEG[EEG_indices[:10]])
print(trial_info[:10,0])
EEG[:10]
###Output
_____no_output_____
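###Markdown
For the more general case mentioned above, where the event clock and the EEG clock do not share a sampling rate, a hedged sketch of a nearest-timestamp lookup (for the synchronous data here it should agree with the direct index computation, up to the floating-point edge cases noted in 4.1):
###Code
# Sketch: for each event time, find the index of the closest EEG timestamp.
def find_nearest_indices(t_ref, event_times):
    idx = np.searchsorted(t_ref, event_times)              # candidate insertion points
    idx = np.clip(idx, 1, len(t_ref) - 1)                  # keep both neighbours in range
    left_closer = (event_times - t_ref[idx - 1]) < (t_ref[idx] - event_times)
    return (idx - left_closer).astype(int)                 # step back where the left neighbour is closer
nearest = find_nearest_indices(t_EEG, trial_info[:, 0])
print(t_EEG[nearest[:10]])    # should match the event times printed above
###Output
_____no_output_____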
###Markdown
[6] Q5: Step 3 - Grabbing EpochsNow that we have the corresponding indices in the EEG data, we know exactly where the **onset** of each stimulus is. The next thing we have to do is to grab a chunk of data surrounding the onset time, which we define to be `t=0` for every trial. That means you will want to grab a little bit of data before and after that time. [3] 5.1: Write a function that will, given an array of `data`, the sampling rate `fs`, and an `index`, grab a window of data surrounding that index, defined by `len_pre` and `len_post` in **seconds**. Note that `len_pre` should be negative to reflect that it's before the stimulus onset time. I've started this function for you below. Again, there are multiple ways to accomplish this, but the simplest solution can accomplish this in a single line.[1] 5.2: Use this function to grab an epoch for the **10th trial** (remember that's stored in `EEG_indices` already), with a pre-stimulus window of 0.5 seconds and a post-stimulus window of 1 second.[1] 5.3: Create a time vector `t_epoch` that corresponds to the timestamps for that epoch, relative to the stimulus onset time as zero. In other words, this time vector should start at `len_pre` and end at `len_post`, and has the same sampling frequency.[1] 5.4: Plot the epoch of data you grabbed. Note that the x-axis should be time. **Label your axes!**
###Code
def grab_epoch(data, index, fs, len_pre, len_post):
return data[int(index+len_pre*fs):int(index+len_post*fs)]
# _FILL_IN_YOUR_CODE_HERE
len_pre = -0.5 #second
len_post = 1 #second
epoch = grab_epoch(EEG, EEG_indices[9], fs, len_pre, len_post)
print(epoch[:5])
t_epoch = np.arange(len_pre, len_post, 1/fs)
# plotting
plt.figure(figsize=(6,4))
plt.plot(t_epoch, epoch, label='Epoch: 10th Trial')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
###Output
[-8.62576252 -8.63914269 -7.59542043 -7.38226366 -6.82182491]
###Markdown
[4] Q6: Step 4 - Grab All & Baseline Correct (Bonus)[2] 6.1: If you grab an epoch for every trial and store that in a 2D numpy matrix, what should the dimensions of that matrix be, i.e., how many rows and how many columns? What do those numbers correspond to? Hint: you should organize your data such that there are more columns than rows in this particular case.[2] 6.2: Write a function that grabs **all** epochs (every trial) and store that in a 2D numpy matrix. There are a few ways to do this, but they will likely all use `grab_epoch()` somehow. Confirm that it has the same shape that you expect from above. Hint: you can append your epochs indefinitely to a python list using `list.append()`, and use `np.array()` to automatically convert that into a 2D matrix.[BONUS: 2] 6.3: Baseline all your epochs by subtracting the pre-stimulus epoch mean (-0.5 to 0 seconds) of each epoch from itself. **Response for 6.1:** (300, 1500) = (trials, samples in epoch)
###Code
def get_all_epochs(data, indices, fs, len_pre, len_post):
# get all epochs
all_epochs = [grab_epoch(data, idx, fs, len_pre, len_post) for idx in indices]
all_epochs = np.array(all_epochs)
# baselining (if you want, it can also be a separate function)
trial_means = np.mean(all_epochs[:, 0:int(abs(len_pre)*fs)], axis=1)
trial_means = trial_means.reshape(len(trial_means), 1)
all_epochs = all_epochs - trial_means
return all_epochs
epoched_EEG = get_all_epochs(EEG, EEG_indices, fs, len_pre, len_post)
print(epoched_EEG.shape)
# plot all the epochs and average
plt.plot(t_epoch, epoched_EEG.T, '-k', alpha=0.01)
plt.plot(t_epoch, np.mean(epoched_EEG,axis=0), label='Average Response')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
###Output
(300, 1500)
###Markdown
[6] Q7: Step 5 & 6 - Group Based on Trial TypeIn the plot above, I simply averaged over all the epochs to produce the average response (blue). However, as you will recall, there are several different types of trials (second column in `trial_info`). We should group epochs of the same trial type, and average over those. [5] 7.1: You have full flexibility for this part, with the only requirement being to produce a plot with 3 average responses corresponding to the 3 different trial types. Remember to label your plot axes and include a legend for which trace corresponds to which stimulus type. You will be evaluated on 3 things: whether you have successfully separated the epochs into their respective groupings, how well your code is commented to explain what you're doing, and whether you plot is correct and labeled. Since I have not given you a template for making a function, it may be useful to plan out what you want to do beforehand by writing pseudo code (i.e., plain English). Decide what strategy you will take (loops vs. list comprehension vs. others), and whether you want to separate the averaging and the plotting. You already know all the concepts required to tackle this problem (indexing, averaging, plotting), the challenge is putting them together. [1] 7.2: Briefly describe your results, e.g., what's similar and what's different between the conditions? Which stimulus produced the largest response.---Your plot should look something like: **Response for 7.2:** ANSWER 14 & -14
###Code
def split_trials(eeg, onsets, trial_types, fs, len_pre, len_post):
''' Split trials by type.
Parameters
----------
eeg : np.array or list-like
contains eeg data in uV.
onsets : np.array or list-like
contains onsets of all trials in seconds
trial_type : np.array or list-like
specifies trial type, one per onset, either 1, 2, or 3.
fs : int
sampling frequency
len_pre : int or float
length prior to cue onset (seconds)
len_post : int or float
length after cue onsets (seconds)
Returns
-------
epochs_a : np.array
epochs for trials == 1
epochs_b : np.array
epochs for trials == 2
epochs_c : np.array
epochs for trials == 3
'''
# Split onsets by trial type, then convert seconds to indices
idx_a = onsets[np.where(trial_types == 1)[0]] * fs
idx_a = idx_a.astype(int)
idx_b = onsets[np.where(trial_types == 2)[0]] * fs
idx_b = idx_b.astype(int)
idx_c = onsets[np.where(trial_types == 3)[0]] * fs
idx_c = idx_c.astype(int)
# Get epochs for each trial type
epochs_a = get_all_epochs(eeg, idx_a, fs, len_pre, len_post)
epochs_b = get_all_epochs(eeg, idx_b, fs, len_pre, len_post)
epochs_c = get_all_epochs(eeg, idx_c, fs, len_pre, len_post)
return epochs_a, epochs_b, epochs_c
epochs_a, epochs_b, epochs_c = split_trials(EEG, trial_info[:, 0], trial_info[:, 1],
fs, len_pre, len_post)
# plot
t_epoch = np.arange(len_pre, len_post, 1/fs)
plt.figure(figsize=(16,9))
plt.plot(t_epoch, np.mean(epochs_a, axis=0), label="Average Response: Type 1")
plt.plot(t_epoch, np.mean(epochs_b, axis=0), label="Average Response: Type 2")
plt.plot(t_epoch, np.mean(epochs_c, axis=0), label="Average Response: Type 3")
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
#plt.xlim([-10, 600])
plt.legend()
###Output
_____no_output_____
###Markdown
- Name- PID- COGS118C - Assignment 1 This notebook has [30 + 3 bonus] points in total The number of points for each question is denoted by []. Make sure you've answered all the questions and that the point total add up. --- Lab 1 - Time Series, Sampling, and Epoched Analysis (ERPs)In this lab, we will cover the first stages of signal processing: sampling data. This includes digitization and sampling theorem. We will generate and plot some signals. Then, we'll perform our first kind of neural signal analysis: event-related potentials.Key concepts:- visualizing time-series- digitization/quantization- sampling- (more) indexing arrays- epoching- event-related potentials (ERPs): noise and averaging**Answers for questions requiring written responses can be entered in the cell immediately below the question, so that when you write your response, it doesn't screw up the formatting of the question.** Analog signalsReal world signals are continuous in time and amplitude (up to quantum-level limits, anyway). These are referred to as **"analog"** signals (Google it). Soundwaves that we produce when we speak or when we play a violin, for example, are analog signals. Equivalently, there are "analog devices" that produce, receive, and/or operate on analog signals. These often involve "analog" circuits. [1] Q1:[1] 1.1: Give 3 examples of analog devices. **Response for 1.1:** Digital signalsPeople used to analyze signals using analog circuits. This is pretty hardcore, and requires extensive hands-on knowledge about circuitry. Once you want to analyze the signal on a "digital" computer, however, you have to "digitize" the signal. This requires an **"analog-to-digital converter"** or ADC for short. ---A tangent (without delving too much into how a computer works): all modern computers operate with binary transistors, which use a combination of "bits" to represent all other types of information. In the analog world, there are an infinite number of number between 0 and 1, so there is a limit to how accurately we can represent small decimals (or really big numbers). Python uses [floating point](https://0.30000000000000004.com/). Everything you see on your screen, at the lowest level, is converted into a numerical **binary** representation, even strings (see [ASCII](https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html) table, for example).---Anyway, to digitize an analog signal, you have to discretely sample, both in value (voltage, brightness, etc) and in time. The former is usually called **digitization or quantization**, while **sampling** usually refers to the latter. It's like drawing a grid over your continuous signals and interpolating its values only at where the grid crosses. Let's get into itWithout further ado: let's load up some EEG signals and explore. But first, make the necessary python module imports.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import io # this submodule lets us load the signal we want
%matplotlib inline
# scipy loads .mat file into a dictionary
# the details are not crucial, we just have to unpack them into python variables
EEG_data = io.loadmat('data/EEG_exp.mat', squeeze_me = True)
# print all the variables that exist in the dictionary
print(EEG_data.keys())
# this contains the EEG data
EEG = EEG_data['EEG']
# this contains the sampling rate, in Hz (or samples/second)
fs = EEG_data['fs']
# let's plot the signal
plt.figure(figsize=(15,3))
plt.plot(EEG)
# ALWAYS label your plot axes in this course (and ever)
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
# now let's zoom in to see more detail
plt.figure(figsize=(15,3))
plt.plot(EEG, '.') # '.' means plot the data points as individual dots without linking them
plt.xlim([0,1000]) # this limits the x-axis shown
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
dict_keys(['__header__', '__version__', '__globals__', 'EEG', 'fs', 'trial_info'])
###Markdown
[3] Q2: DigitizationAs you can see above, the signal we loaded is already a digitally sampled time series (a little over 70,000 samples), represented by discrete points in the second plot. To study the effect of quantization, let's simulate what would happen if we further quantized the signal, with a (prehistoric) 4-bit ADC.[1] 2.1: How many possible values can a 4-bit ADC represent? Remember, this means that the ADC has 4 binary 'bits' that it can use, thus giving you a total of how many levels? Compute this number in code and store that value in the variable `num_levels` below.[1] 2.2: Let's say our ADC has a total range between -32uV to 32uV. What is the voltage resolution of our ADC then? In other words, what is the finest voltage difference our ADC can distinguish between two samples? Compute this number in code and store that value in the variable `delta_v` below.[1] 2.3: Run the next two cells, they should produce a graph where the orange trace looks very quantized (kind of square). This is not good, because then we cannot distinguish small fluctuations in our signals, which, as we will see later in the course, are very important. **Re-run** the next two cells, but experiment with different values for `num_bits`. Just based on visual inspection of the plot, what is the minimum number of bits that you would want your ADC to have in this case, assuming the blue trace is a faithful representation of your signal? There's no one right answer, but justify your response. **Response for 2.3:**
###Code
num_bits = 4
min_v, max_v = -32,32
num_levels = 2**num_bits# _FILL_IN_YOUR_CODE_HERE
print(f'With {num_bits}bits there are {num_levels} levels')
delta_v = (max_v-min_v)/num_levels# _FILL_IN_YOUR_CODE_HERE
print(f'Voltage resolution is of {delta_v}uV')
# create the quantization vector, these are the new possible values that your signal can take
ADC_levels = np.arange(min_v,max_v,delta_v)+delta_v/2
# quantize the EEG signal with our crappy ADC with the function np.digitize
# note that we have to scale the redigitized signal to its original units
EEG_quant = np.digitize(EEG,bins=ADC_levels)*delta_v+min_v
plt.figure(figsize=(15,4))
plt.plot(EEG, label='Original EEG')
plt.plot(EEG_quant, label='Quantized EEG', alpha=0.8)
plt.xlim([0,1000]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
num_bits = 8
min_v, max_v = -32,32
num_levels = 2**num_bits# _FILL_IN_YOUR_CODE_HERE
print(f'With {num_bits}bits there are {num_levels} levels')
delta_v = (max_v-min_v)/num_levels# _FILL_IN_YOUR_CODE_HERE
print(f'Voltage resolution is of {delta_v}uV')
# create the quantization vector, these are the new possible values that your signal can take
ADC_levels = np.arange(min_v,max_v,delta_v)+delta_v/2
# quantize the EEG signal with our crappy ADC with the function np.digitize
# note that we have to scale the redigitized signal to its original units
EEG_quant = np.digitize(EEG,bins=ADC_levels)*delta_v+min_v
plt.figure(figsize=(15,4))
plt.plot(EEG, label='Original EEG')
plt.plot(EEG_quant, label='Quantized EEG', alpha=0.8)
plt.xlim([0,1000]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
_____no_output_____
###Markdown
--- Sample Number vs. TimeNotice that in all the plots above, the x-axis is "sample number", which simply correponds to the position each value is in the array `EEG`. We want to create a corresponding time vector, which marks at what clock time each value is sampled at. Sometimes your data will include a time vector. But for the sake of this exercise, you are asked to create the time vector based on the information/variables you have. [6] Q3: Sampling in Time[1] 3.1: Given the sampling rate, what is the sampling **period**? In other words, how much time elapses between each consecutive sample? Compute this number as a function of `fs` and store it in the variable `dt` below.[1] 3.2: How long in total is this signal, in absolute time? Compute and store this in the variable `T_exp` below.[1] 3.3: Construct the corresponding time vector for the EEG data, assuming that the first sample came at t=0 and evenly spaced samples at `dt`. Store that in the variable `t_EEG` below. Hint: check out the function `np.arange()`.[2] 3.4: Re-plot the signal as a line chart, but with the x-axis as time (using the time vector you created above), and zoom into the first 1 second of the data. **Take note to label your plots carefully, with units!**[1] 3.5: To simulate **downsampling** in time, plot every **10th** value of the EEG data by indexing the array (check Google/StackExchange for how to do this). Remember, this applies both to the time vector and your EEG data. **Make sure to label your data and display the legend as Q2 above.**[BONUS: 1] 3.6: Sometimes it's useful to downsample your signal in time to conserve memory. As we did above, by taking every 10th value in our data, we essentially reduce the data size 10-fold. However, this is **NOT** the entirely right way to downsample your data. What issue do we introduce when we simply do that? (Hint: the answer can be as short as one word, and Google is your friend here.) **Response for 3.6:**
###Code
dt = 1/fs# _FILL_IN_YOUR_CODE_HERE
T_exp = len(EEG) * dt# _FILL_IN_YOUR_CODE_HERE
t_EEG = np.arange(0, T_exp, dt) # _FILL_IN_YOUR_CODE_HERE
# Plotting the signal and its downsampled version
plt.figure(figsize=(15,3))
plt.plot(t_EEG, EEG, label='Original EEG')
plt.plot(t_EEG[::10], EEG[::10], '.-', label='Downsampled EEG')
plt.xlim([0,1]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')# _FILL_IN_YOUR_CODE_HERE
###Output
_____no_output_____
###Markdown
Event-Related AnalysisThe above data actually comes from an event-style EEG experiment. The participant is shown visual stimuli at regular intervals, aimed to trigger a reliable brain response for each type of stimuli (cat vs. dog pics, for example). This is a very common type of study design in neuroscience (and psychology). In this case, we will need to know when a stimulus was presented, and what type of stimulus it was. This information is stored in the variable `trial_info`, where the **first column has the stimulus onset time (in seconds), and the second column has the type of stimulus shown (1,2, or 3).** These are often extra streams of data sent through the "trigger channel" by the stimulus-presenting computer directly to the recording equipment, in order to synchronize with the EEG data.
###Code
trial_info = EEG_data['trial_info']
# print the first 10 events
print(trial_info[:10,:])
trial_info.shape
###Output
_____no_output_____
###Markdown
--- Process for Analyzing Event-Related DataThese types of experiments follow a pretty standard analysis process. 1. Import and pre-process your data (already done; we'll skip the pre-processing for now)2. Given the stimulus presentation timestamps (first column of `trial_info` above), find the corresponding indices in your EEG data by matching to the `t_EEG` time vector.3. Cut out an **epoch** (window of data) around the stimulus presentation time, which usually includes: - pre-stimulus baseline (~0.5 seconds before stimulus presentation) - stimulus presentation (t = 0) - stimulus-driven response (or event-related response, 0-1 second after stimulus presentation)4. Baseline subtraction: subtract each epoch by its mean pre-stimulus value to account for any slow drifts over time.5. Group epochs based on stimulus type, and average epochs of the same type.6. Plot the average response (s). [4] Q4: Step 2 - Find Matching Timestamps in EEG DataGiven the event times in `trial_info`, which we will assume to be the stimulus onset time for this experiment, we have to find the corresponding timestamp in the EEG data. Note that the timestamps may not always match exactly, as they could have different sampling rates. In those cases, you will have to settle for finding the **closest** timestamps. Currently, however, life was made easy for us by virtue of the fact that the EEG data (and timestamps) and the stimulus event timestamps are synchronously sampled at 1000Hz.In this case, we can directly convert the event timestamp into an integer index, since we know the sampling frequency and starting time. [1] 4.1: If the EEG timestamp starts at `t=0`, which is indexed by `i=0`, and is sampled at `fs=1000`, at which index will the EEG timestamp be equal to **3.050 seconds**? Compute and store this in the variable `trial_index` below. Note that to index an array, the number has to be an integer, which I've converted for you. (You will notice that the value is *a LITTLE* off. That's a precision issue and We can ignore that for now.)[3] 4.2: Following this logic, write a function that will find the corresponding index in the EEG data/timestamp for every event timestamp. Return that as an array of integers (`my_arr.astype(int)` will convert an array to all integers). You may use a for loop, list comprehension, or a simple (one-line) array calculation for this. Confirm that the timestamps match what you expect by printing the first 10 events (I've done this for you).
###Code
fs = 1000#units/sec
t = 3.050#sec
trial_index = t*(fs)#_FILL_IN_YOUR_CODE_HERE
print(t_EEG[np.array(trial_index).astype(int)]) # access the value at the corresponding index
def compute_EEG_indices(event_timestamps, fs):
index_array = np.array([i[0] for i in (event_timestamps*fs).astype(int)])# _FILL_IN_YOUR_CODE_HERE
return index_array
# call your function to compute the corresponding indices
EEG_indices = compute_EEG_indices(trial_info,fs)
# print your solution and the actual event times to compare, they should be identical
print(t_EEG[EEG_indices[:10]])
print(trial_info[:10,0])
#do the same using a for loop
###Output
_____no_output_____
###Markdown
[6] Q5: Step 3 - Grabbing EpochsNow that we have the corresponding indices in the EEG data, we know exactly where the **onset** of each stimulus is. The next thing we have to do is to grab a chunk of data surrounding the onset time, which we define to be `t=0` for every trial. That means you will want to grab a little bit of data before and after that time. [3] 5.1: Write a function that will, given an array of `data`, the sampling rate `fs`, and an `index`, grab a window of data surrounding that index, defined by `len_pre` and `len_post` in **seconds**. Note that `len_pre` should be negative to reflect that it's before the stimulus onset time. I've started this function for you below. Again, there are multiple ways to accomplish this, but the simplest solution can accomplish this in a single line.[1] 5.2: Use this function to grab an epoch for the **10th trial** (remember that's stored in `EEG_indices` already), with a pre-stimulus window of 0.5 seconds and a post-stimulus window of 1 second.[1] 5.3: Create a time vector `t_epoch` that corresponds to the timestamps for that epoch, relative to the stimulus onset time as zero. In other words, this time vector should start at `len_pre` and end at `len_post`, and has the same sampling frequency.[1] 5.4: Plot the epoch of data you grabbed. Note that the x-axis should be time. **Label your axes!**
###Code
def grab_epoch(data, index, fs, len_pre, len_post):
# _FILL_IN_YOUR_CODE_HERE
return data[(index+(int(len_pre*fs))) : (index+(len_post*fs))]
# _FILL_IN_YOUR_CODE_HERE
len_pre = -0.5 #second
len_post = 1 #second
epoch = grab_epoch(EEG, EEG_indices[9], 1000, len_pre, len_post)
print(epoch[:5])
t_epoch = np.arange(len_pre,len_post,dt)# _FILL_IN_YOUR_CODE_HERE
# plotting
plt.figure(figsize=(6,4))
# _FILL_IN_YOUR_CODE_HERE
plt.plot(t_epoch, epoch)
plt.xlabel('Time Value')
plt.ylabel('Voltage (uV)')
plt.title('10th trial')
epoch.shape
###Output
_____no_output_____
###Markdown
[4] Q6: Step 4 - Grab All & Baseline Correct (Bonus)[2] 6.1: If you grab an epoch for every trial and store that in a 2D numpy matrix, what should the dimensions of that matrix be, i.e., how many rows and how many columns? What do those numbers correspond to? Hint: you should organize your data such that there are more columns than rows in this particular case.[2] 6.2: Write a function that grabs **all** epochs (every trial) and store that in a 2D numpy matrix. There are a few ways to do this, but they will likely all use `grab_epoch()` somehow. Confirm that it has the same shape that you expect from above. Hint: you can append your epochs indefinitely to a python list using `list.append()`, and use `np.array()` to automatically convert that into a 2D matrix.[BONUS: 2] 6.3: Baseline all your epochs by subtracting the pre-stimulus epoch mean (-0.5 to 0 seconds) of each epoch from itself. **Response for 6.1:**
###Code
epoch.shape
trial_info.shape
print(len(EEG_indices))
print(len(epoch))
def get_baseline(epoch):
#get baseline by substracting epoch mean
return [itself - np.mean(epoch[:int(0.5 * fs)].astype(int)) for itself in epoch]
#note that 0.5*fs corrresponds to the first 500 points up to time=0
def get_all_epochs(data, indices, fs, len_pre, len_post):
#create list of epochs
list_epochs = []
#loop through indices in epoch
for i in indices:
#add epochs to list by using prev create function
#convert to baseline
list_epochs.append(get_baseline(grab_epoch(data, i, fs, len_pre, len_post)))
#convert to array
return np.array(list_epochs)
epoched_EEG = get_all_epochs(EEG, EEG_indices, fs, len_pre, len_post)
print(epoched_EEG.shape)
# plot all the epochs and average
plt.figure(figsize=(16,9))
plt.plot(t_epoch, epoched_EEG.T, '-k', alpha=0.01)
plt.plot(t_epoch, np.mean(epoched_EEG,axis=0), label='Average Response')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
def baseline_epoch(epoch):
    # subtract the mean of the pre-stimulus window (first 500 samples = 0.5 s at 1000 Hz)
    # from the WHOLE epoch, without modifying the original EEG array in place
    epoch_mean = np.mean(epoch[:500])
    baselined_epoch = epoch - epoch_mean
    return baselined_epoch
def get_all_epochs(data, indices, fs, len_pre, len_post):
    # same argument order as the earlier version of this function, so later calls work unchanged
    epoch_list = []
    for i in range(len(indices)):
        temp_epoch = grab_epoch(data, indices[i], fs, len_pre, len_post)
        temp_epoch = baseline_epoch(temp_epoch)
        epoch_list.append(temp_epoch)
    all_epochs = np.array(epoch_list)
    return all_epochs
epoched_EEG = get_all_epochs(EEG, EEG_indices, fs, len_pre, len_post)
print(epoched_EEG.shape)
# plot all the epochs and average
plt.plot(t_epoch, epoched_EEG.T, '-k', alpha=0.01)
plt.plot(t_epoch, np.mean(epoched_EEG,axis=0), label='Average Response')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
[6] Q7: Step 5 & 6 - Group Based on Trial TypeIn the plot above, I simply averaged over all the epochs to produce the average response (blue). However, as you will recall, there are several different types of trials (second column in `trial_info`). We should group epochs of the same trial type, and average over those. [5] 7.1: You have full flexibility for this part, with the only requirement being to produce a plot with 3 average responses corresponding to the 3 different trial types. Remember to label your plot axes and include a legend for which trace corresponds to which stimulus type. You will be evaluated on 3 things: whether you have successfully separated the epochs into their respective groupings, how well your code is commented to explain what you're doing, and whether you plot is correct and labeled. Since I have not given you a template for making a function, it may be useful to plan out what you want to do beforehand by writing pseudo code (i.e., plain English). Decide what strategy you will take (loops vs. list comprehension vs. others), and whether you want to separate the averaging and the plotting. You already know all the concepts required to tackle this problem (indexing, averaging, plotting), the challenge is putting them together. [1] 7.2: Briefly describe your results, e.g., what's similar and what's different between the conditions? Which stimulus produced the largest response.---Your plot should look something like: **Response for 7.2:** ANSWER 14 & -14
###Code
#Step 1: Group trials by type of stimulus
# build one list of trial_info rows per stimulus type
type1, type2, type3 = [], [], []
for i in trial_info:
#creating a list of all type 1 data
if i[1] == 1.:
type1.append(i)
#creating a list of all type 2 data
elif i[1] == 2.:
type2.append(i)
#creating a list of all type 3 data
else:
type3.append(i)
#Step 2 get indices
type1_indices = compute_EEG_indices(np.array(type1), fs)
type2_indices = compute_EEG_indices(np.array(type2), fs)
type3_indices = compute_EEG_indices(np.array(type3), fs)
#get epochs from indices for all 3 trials
type1_epochs = get_all_epochs(EEG, type1_indices, fs, len_pre, len_post)
type2_epochs = get_all_epochs(EEG, type2_indices, fs, len_pre, len_post)
type3_epochs = get_all_epochs(EEG, type3_indices, fs, len_pre, len_post)
#check
#type1_epochs
#type2_epochs
#type3_epochs
# multiply by 1000 to convert seconds to milliseconds
plt.plot(t_epoch *1000, np.mean(type1_epochs,axis=0), label = 'Type 1')
plt.plot(t_epoch*1000, np.mean(type2_epochs,axis=0), label = 'Type 2')
plt.plot(t_epoch *1000, np.mean(type3_epochs,axis=0), label = 'Type 3')
# get lines like in example graph
plt.axhline(0, color='black')
plt.axvline(0, color='black')
# zoom in on the specific part of the graph we're interested in
plt.xlim([-50,700]); plt.ylim([-5, 10]);
# labels of x and y
plt.xlabel('Latency (ms)')
plt.ylabel('Potential (uV)')
#get legend
plt.legend()
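# Hedged sketch of an alternative grouping strategy (not the original approach): boolean masks
# over the second column of trial_info select each stimulus type directly, reusing the same
# helper calls as above. 'avg_by_type', 'mask' and 'idx' are illustrative names.
avg_by_type = {}
for stim_type in (1, 2, 3):
    mask = trial_info[:, 1] == stim_type              # rows of this stimulus type
    idx = compute_EEG_indices(trial_info[mask], fs)   # onset indices, mirroring the calls above
    avg_by_type[stim_type] = np.mean(get_all_epochs(EEG, idx, fs, len_pre, len_post), axis=0)
# these per-type averages could then be plotted exactly as above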
###Output
_____no_output_____
###Markdown
- Name- PID- COGS118C - Assignment 1 This notebook has [30 + 3 bonus] points in total The number of points for each question is denoted by []. Make sure you've answered all the questions and that the point total add up. --- Lab 1 - Time Series, Sampling, and Epoched Analysis (ERPs)In this lab, we will cover the first stages of signal processing: sampling data. This includes digitization and sampling theorem. We will generate and plot some signals. Then, we'll perform our first kind of neural signal analysis: event-related potentials.Key concepts:- visualizing time-series- digitization/quantization- sampling- (more) indexing arrays- epoching- event-related potentials (ERPs): noise and averaging**Answers for questions requiring written responses can be entered in the cell immediately below the question, so that when you write your response, it doesn't screw up the formatting of the question.** Analog signalsReal world signals are continuous in time and amplitude (up to quantum-level limits, anyway). These are referred to as **"analog"** signals (Google it). Soundwaves that we produce when we speak or when we play a violin, for example, are analog signals. Equivalently, there are "analog devices" that produce, receive, and/or operate on analog signals. These often involve "analog" circuits. [1] Q1:[1] 1.1: Give 3 examples of analog devices. **Response for 1.1:** Digital signalsPeople used to analyze signals using analog circuits. This is pretty hardcore, and requires extensive hands-on knowledge about circuitry. Once you want to analyze the signal on a "digital" computer, however, you have to "digitize" the signal. This requires an **"analog-to-digital converter"** or ADC for short. ---A tangent (without delving too much into how a computer works): all modern computers operate with binary transistors, which use a combination of "bits" to represent all other types of information. In the analog world, there are an infinite number of number between 0 and 1, so there is a limit to how accurately we can represent small decimals (or really big numbers). Python uses [floating point](https://0.30000000000000004.com/). Everything you see on your screen, at the lowest level, is converted into a numerical **binary** representation, even strings (see [ASCII](https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html) table, for example).---Anyway, to digitize an analog signal, you have to discretely sample, both in value (voltage, brightness, etc) and in time. The former is usually called **digitization or quantization**, while **sampling** usually refers to the latter. It's like drawing a grid over your continuous signals and interpolating its values only at where the grid crosses. Let's get into itWithout further ado: let's load up some EEG signals and explore. But first, make the necessary python module imports.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import io # this submodule lets us load the signal we want
%matplotlib inline
# scipy loads .mat file into a dictionary
# the details are not crucial, we just have to unpack them into python variables
EEG_data = io.loadmat('data/EEG_exp.mat', squeeze_me = True)
# print all the variables that exist in the dictionary
print(EEG_data.keys())
# this contains the EEG data
EEG = EEG_data['EEG']
# this contains the sampling rate, in Hz (or samples/second)
fs = EEG_data['fs']
# let's plot the signal
plt.figure(figsize=(15,3))
plt.plot(EEG)
# ALWAYS label your plot axes in this course (and ever)
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
# now let's zoom in to see more detail
plt.figure(figsize=(15,3))
plt.plot(EEG, '.') # '.' means plot the data points as individual dots without linking them
plt.xlim([0,1000]) # this limits the x-axis shown
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
dict_keys(['__header__', '__version__', '__globals__', 'EEG', 'fs', 'trial_info'])
###Markdown
[3] Q2: DigitizationAs you can see above, the signal we loaded is already a digitally sampled time series (a little over 70,000 samples), represented by discrete points in the second plot. To study the effect of quantization, let's simulate what would happen if we further quantized the signal, with a (prehistoric) 4-bit ADC.[1] 2.1: How many possible values can a 4-bit ADC represent? Remember, this means that the ADC has 4 binary 'bits' that it can use, thus giving you a total of how many levels? Compute this number in code and store that value in the variable `num_levels` below.[1] 2.2: Let's say our ADC has a total range between -32uV to 32uV. What is the voltage resolution of our ADC then? In other words, what is the finest voltage difference our ADC can distinguish between two samples? Compute this number in code and store that value in the variable `delta_v` below.[1] 2.3: Run the next two cells, they should produce a graph where the orange trace looks very quantized (kind of square). This is not good, because then we cannot distinguish small fluctuations in our signals, which, as we will see later in the course, are very important. **Re-run** the next two cells, but experiment with different values for `num_bits`. Just based on visual inspection of the plot, what is the minimum number of bits that you would want your ADC to have in this case, assuming the blue trace is a faithful representation of your signal? There's no one right answer, but justify your response. **Response for 2.3:**
###Code
num_bits = 4
min_v, max_v = -32,32
num_levels = 2**num_bits
delta_v = (abs(min_v) + abs(max_v)) /num_levels
# create the quantization vector, these are the new possible values that your signal can take
ADC_levels = np.arange(min_v,max_v,delta_v)+delta_v/2
# quantize the EEG signal with our crappy ADC with the function np.digitize
# note that we have to scale the redigitized signal to its original units
EEG_quant = np.digitize(EEG,bins=ADC_levels)*delta_v+min_v
plt.figure(figsize=(15,4))
plt.plot(EEG, label='Original EEG')
plt.plot(EEG_quant, label='Quantized EEG', alpha=0.8)
plt.xlim([0,1000]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
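# Hedged sketch (illustrative variable names): quantify the quantization error for a few bit
# depths, one way to back up the visual judgement asked for in 2.3 with a number. The
# re-quantization recipe is the same as the one used above.
for nb in range(2, 9):
    dv = (max_v - min_v) / 2**nb
    levels = np.arange(min_v, max_v, dv) + dv/2
    quant = np.digitize(EEG, bins=levels) * dv + min_v
    print('%d bits: RMSE = %.3f uV' % (nb, np.sqrt(np.mean((EEG - quant)**2))))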
###Output
_____no_output_____
###Markdown
--- Sample Number vs. TimeNotice that in all the plots above, the x-axis is "sample number", which simply correponds to the position each value is in the array `EEG`. We want to create a corresponding time vector, which marks at what clock time each value is sampled at. Sometimes your data will include a time vector. But for the sake of this exercise, you are asked to create the time vector based on the information/variables you have. [6] Q3: Sampling in Time[1] 3.1: Given the sampling rate, what is the sampling **period**? In other words, how much time elapses between each consecutive sample? Compute this number as a function of `fs` and store it in the variable `dt` below.[1] 3.2: How long in total is this signal, in absolute time? Compute and store this in the variable `T_exp` below.[1] 3.3: Construct the corresponding time vector for the EEG data, assuming that the first sample came at t=0 and evenly spaced samples at `dt`. Store that in the variable `t_EEG` below. Hint: check out the function `np.arange()`.[2] 3.4: Re-plot the signal as a line chart, but with the x-axis as time (using the time vector you created above), and zoom into the first 1 second of the data. **Take note to label your plots carefully, with units!**[1] 3.5: To simulate **downsampling** in time, plot every **10th** value of the EEG data by indexing the array (check Google/StackExchange for how to do this). Remember, this applies both to the time vector and your EEG data. **Make sure to label your data and display the legend as Q2 above.**[BONUS: 1] 3.6: Sometimes it's useful to downsample your signal in time to conserve memory. As we did above, by taking every 10th value in our data, we essentially reduce the data size 10-fold. However, this is **NOT** the entirely right way to downsample your data. What issue do we introduce when we simply do that? (Hint: the answer can be as short as one word, and Google is your friend here.) **Response for 3.6:**
###Code
dt = 1/fs
T_exp = EEG_quant.shape[0] * dt
t_EEG = np.arange(0 , T_exp , dt)
# Plotting the signal and its downsampled version
plt.figure(figsize=(15,3))
plt.plot(t_EEG, EEG, label='EEG')
plt.plot(t_EEG[::10], EEG[::10], '.-', label='Downsampled')
plt.xlim([0,1]); plt.ylim([-15, 15]);
plt.legend()
# Issue with naive downsampling (3.6): aliasing - frequency content above the new Nyquist rate folds back, so different signals become indistinguishable.
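# Hedged sketch (assumes scipy.signal is available): a more careful way to downsample is to
# low-pass filter before discarding samples; scipy.signal.decimate does both steps and so
# avoids the aliasing mentioned above. 'EEG_ds' and 't_ds' are illustrative names.
from scipy.signal import decimate
EEG_ds = decimate(EEG, 10)            # anti-aliasing filter + keep every 10th sample
t_ds = t_EEG[::10][:len(EEG_ds)]      # matching time vector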
###Output
_____no_output_____
###Markdown
Event-Related AnalysisThe above data actually comes from an event-style EEG experiment. The participant is shown visual stimuli at regular intervals, aimed to trigger a reliable brain response for each type of stimuli (cat vs. dog pics, for example). This is a very common type of study design in neuroscience (and psychology). In this case, we will need to know when a stimulus was presented, and what type of stimulus it was. This information is stored in the variable `trial_info`, where the **first column has the stimulus onset time (in seconds), and the second column has the type of stimulus shown (1,2, or 3).** These are often extra streams of data sent through the "trigger channel" by the stimulus-presenting computer directly to the recording equipment, in order to synchronize with the EEG data.
###Code
trial_info = EEG_data['trial_info']
# print the first 10 events
print(trial_info[:10,:])
###Output
[[ 1. 3. ]
[ 3.375 3. ]
[ 5.87 1. ]
[ 8.183 2. ]
[10.419 1. ]
[12.588 1. ]
[14.87 2. ]
[17.086 2. ]
[19.164 3. ]
[21.237 2. ]]
###Markdown
--- Process for Analyzing Event-Related DataThese types of experiments follow a pretty standard analysis process. 1. Import and pre-process your data (already done; we'll skip the pre-processing for now)2. Given the stimulus presentation timestamps (first column of `trial_info` above), find the corresponding indices in your EEG data by matching to the `t_EEG` time vector.3. Cut out an **epoch** (window of data) around the stimulus presentation time, which usually includes: - pre-stimulus baseline (~0.5 seconds before stimulus presentation) - stimulus presentation (t = 0) - stimulus-driven response (or event-related response, 0-1 second after stimulus presentation)4. Baseline subtraction: subtract each epoch by its mean pre-stimulus value to account for any slow drifts over time.5. Group epochs based on stimulus type, and average epochs of the same type.6. Plot the average response (s). [4] Q4: Step 2 - Find Matching Timestamps in EEG DataGiven the event times in `trial_info`, which we will assume to be the stimulus onset time for this experiment, we have to find the corresponding timestamp in the EEG data. Note that the timestamps may not always match exactly, as they could have different sampling rates. In those cases, you will have to settle for finding the **closest** timestamps. Currently, however, life was made easy for us by virtue of the fact that the EEG data (and timestamps) and the stimulus event timestamps are synchronously sampled at 1000Hz.In this case, we can directly convert the event timestamp into an integer index, since we know the sampling frequency and starting time. [1] 4.1: If the EEG timestamp starts at `t=0`, which is indexed by `i=0`, and is sampled at `fs=1000`, at which index will the EEG timestamp be equal to **3.050 seconds**? Compute and store this in the variable `trial_index` below. Note that to index an array, the number has to be an integer, which I've converted for you. (You will notice that the value is *a LITTLE* off. That's a precision issue and We can ignore that for now.)[3] 4.2: Following this logic, write a function that will find the corresponding index in the EEG data/timestamp for every event timestamp. Return that as an array of integers (`my_arr.astype(int)` will convert an array to all integers). You may use a for loop, list comprehension, or a simple (one-line) array calculation for this. Confirm that the timestamps match what you expect by printing the first 10 events (I've done this for you).
###Code
trial_index = 3.050*1000
print(t_EEG[np.array(trial_index).astype(int)]) # access the value at the corresponding index
def compute_EEG_indices(event_timestamps, fs):
trial_indices = np.array(event_timestamps*fs).astype(int)
return trial_indices
# call your function to compute the corresponding indices
EEG_indices = compute_EEG_indices(trial_info[:10,0], fs)
# print your solution and the actual event times to compare, they should be identical
print(t_EEG[EEG_indices[:10]])
print(trial_info[:10,0])
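# Hedged sketch (illustrative helper, not part of the original answer): if the event clock and
# the EEG clock were NOT sampled synchronously, the closest EEG timestamp can be found instead
# of multiplying by fs. 'closest_indices' is an assumed name.
def closest_indices(event_timestamps, t_EEG):
    event_timestamps = np.asarray(event_timestamps)
    idx = np.searchsorted(t_EEG, event_timestamps)        # insertion points in the sorted time vector
    idx = np.clip(idx, 1, len(t_EEG) - 1)                 # keep a valid left/right neighbour
    left, right = t_EEG[idx - 1], t_EEG[idx]
    idx -= (event_timestamps - left) < (right - event_timestamps)  # step back when the left neighbour is closer
    return idx.astype(int)

print(closest_indices(trial_info[:10, 0], t_EEG))         # should match EEG_indices[:10] here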
###Output
[ 1. 3.375 5.87 8.183 10.419 12.588 14.87 17.086 19.164 21.237]
[ 1. 3.375 5.87 8.183 10.419 12.588 14.87 17.086 19.164 21.237]
###Markdown
[6] Q5: Step 3 - Grabbing EpochsNow that we have the corresponding indices in the EEG data, we know exactly where the **onset** of each stimulus is. The next thing we have to do is to grab a chunk of data surrounding the onset time, which we define to be `t=0` for every trial. That means you will want to grab a little bit of data before and after that time. [3] 5.1: Write a function that will, given an array of `data`, the sampling rate `fs`, and an `index`, grab a window of data surrounding that index, defined by `len_pre` and `len_post` in **seconds**. Note that `len_pre` should be negative to reflect that it's before the stimulus onset time. I've started this function for you below. Again, there are multiple ways to accomplish this, but the simplest solution can accomplish this in a single line.[1] 5.2: Use this function to grab an epoch for the **10th trial** (remember that's stored in `EEG_indices` already), with a pre-stimulus window of 0.5 seconds and a post-stimulus window of 1 second.[1] 5.3: Create a time vector `t_epoch` that corresponds to the timestamps for that epoch, relative to the stimulus onset time as zero. In other words, this time vector should start at `len_pre` and end at `len_post`, and has the same sampling frequency.[1] 5.4: Plot the epoch of data you grabbed. Note that the x-axis should be time. **Label your axes!**
###Code
def grab_epoch(data, index, fs, len_pre, len_post):
    epoch = data[index+(int(len_pre*fs)) : index+(int(len_post*fs))+1] # the +1 includes the sample at exactly len_post, giving 1501 samples for a -0.5 s to 1 s window at 1 kHz
return epoch
# _FILL_IN_YOUR_CODE_HERE
len_pre = -0.5 #second
len_post = 1 #second
epoch = grab_epoch(EEG, EEG_indices[9], fs, len_pre, len_post) # index 9 because indexing is 0-based, so this is the 10th trial
print(epoch[:5])
t_epoch = grab_epoch(t_EEG, EEG_indices[9], fs, len_pre, len_post) - t_EEG[EEG_indices[9]] # shift so times are relative to stimulus onset (t=0), running from len_pre to len_post
# plotting
plt.figure(figsize=(6,4))
plt.plot(t_epoch , epoch , label = 'One epoch')
plt.xlabel('Time relative to stimulus onset (s)')
plt.ylabel('Voltage (uV)')
# _FILL_IN_YOUR_CODE_HERE
###Output
[-8.62576252 -8.63914269 -7.59542043 -7.38226366 -6.82182491]
###Markdown
[4] Q6: Step 4 - Grab All & Baseline Correct (Bonus)[2] 6.1: If you grab an epoch for every trial and store that in a 2D numpy matrix, what should the dimensions of that matrix be, i.e., how many rows and how many columns? What do those numbers correspond to? Hint: you should organize your data such that there are more columns than rows in this particular case.[2] 6.2: Write a function that grabs **all** epochs (every trial) and store that in a 2D numpy matrix. There are a few ways to do this, but they will likely all use `grab_epoch()` somehow. Confirm that it has the same shape that you expect from above. Hint: you can append your epochs indefinitely to a python list using `list.append()`, and use `np.array()` to automatically convert that into a 2D matrix.[BONUS: 2] 6.3: Baseline all your epochs by subtracting the pre-stimulus epoch mean (-0.5 to 0 seconds) of each epoch from itself. **Response for 6.1:**
###Code
def get_all_epochs(data, indices, fs, len_pre, len_post):
# _FILL_IN_YOUR_CODE_HERE
    # for each trial: grab the epoch, then subtract the mean of its pre-stimulus window (len_pre to 0 s)
    all_epochs = np.array([grab_epoch(data, ind, fs, len_pre, len_post)
                           - np.mean(grab_epoch(data, ind, fs, len_pre, 0))
                           for ind in indices])
# baselining (if you want, it can also be a separate function)
return all_epochs
epoched_EEG = get_all_epochs(EEG, EEG_indices, fs, len_pre, len_post)
print(epoched_EEG.shape)
# plot all the epochs and average
plt.plot(t_epoch, epoched_EEG.T, '-k', alpha=0.01)
plt.plot(t_epoch, np.mean(epoched_EEG,axis=0), label='Average Response')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
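# Sketch tying back to 6.1 (variables reused from above): the epoch matrix should be
# (number of trials) x (samples per epoch), with samples per epoch = (len_post - len_pre)*fs + 1.
expected_cols = int((len_post - len_pre) * fs) + 1
print(len(EEG_indices), 'trials x', expected_cols, 'samples per epoch expected')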
###Output
(10, 1501)
###Markdown
[6] Q7: Step 5 & 6 - Group Based on Trial TypeIn the plot above, I simply averaged over all the epochs to produce the average response (blue) at each timepoint. However, as you will recall, there are several different types of trials (second column in `trial_info`). We should group epochs of the same trial type, and average over those. [5] 7.1: You have full flexibility for this part, with the only requirement being to produce a plot with 3 average responses corresponding to the 3 different trial types. Remember to label your plot axes and include a legend for which trace corresponds to which stimulus type. You will be evaluated on 3 things: whether you have successfully separated the epochs into their respective groupings, how well your code is commented to explain what you're doing, and whether you plot is correct and labeled. Since I have not given you a template for making a function, it may be useful to plan out what you want to do beforehand by writing pseudo code (i.e., plain English). Decide what strategy you will take (loops vs. list comprehension vs. others), and whether you want to separate the averaging and the plotting. You already know all the concepts required to tackle this problem (indexing, averaging, plotting), the challenge is putting them together. [1] 7.2: Briefly describe your results, e.g., what's similar and what's different between the conditions? Which stimulus produced the largest response.---Your plot should look something like: **Response for 7.2:** ANSWER 14 & -14
###Code
# _FILL_IN_YOUR_CODE_HERE
# Separate trials by stimulus type and get the corresponding EEG indices
trial_1_indices =[ int(i[0]*fs) for i in EEG_data['trial_info'] if i[1]==1]
trial_2_indices =[int(i[0]*fs) for i in EEG_data['trial_info'] if i[1]==2]
trial_3_indices =[int(i[0]*fs) for i in EEG_data['trial_info'] if i[1]==3]
#Get all epochs for each trial after baselining; then average it across trials
avg_trial_1_epochs = np.mean(get_all_epochs(EEG, trial_1_indices, fs, len_pre, len_post), 0)
avg_trial_2_epochs = np.mean(get_all_epochs(EEG, trial_2_indices, fs, len_pre, len_post), 0)
avg_trial_3_epochs = np.mean(get_all_epochs(EEG, trial_3_indices, fs, len_pre, len_post),0)
# Time axis based on the pre and post lengths
latency = np.linspace(-500 , 1000 , 1501)
#Plot averaged data
plt.plot(latency, avg_trial_1_epochs, '-m', alpha=1 , label = 'Trial 1')
plt.plot(latency, avg_trial_2_epochs, '-b', alpha=1 , label = 'Trial 2')
plt.plot(latency, avg_trial_3_epochs, '-r', alpha=1 , label = 'Trial 3')
plt.xlim([-10, 650])
plt.xlabel('Latency (ms)')
plt.ylabel('Potential (uV)')
plt.legend()
t = np.linspace(-500 , 1000 , 1501)
t.shape
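# Hedged follow-up sketch for 7.2 (reuses the averages computed above): quantify which
# stimulus type produced the largest average response via its peak-to-peak amplitude.
for ttype, avg in zip((1, 2, 3), (avg_trial_1_epochs, avg_trial_2_epochs, avg_trial_3_epochs)):
    print('Type %d peak-to-peak amplitude: %.2f uV' % (ttype, np.ptp(avg)))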
###Output
_____no_output_____
###Markdown
- Name- PID- COGS118C - Assignment 1 This notebook has [30 + 3 bonus] points in total The number of points for each question is denoted by []. Make sure you've answered all the questions and that the point total add up. --- Lab 1 - Time Series, Sampling, and Epoched Analysis (ERPs)In this lab, we will cover the first stages of signal processing: sampling data. This includes digitization and sampling theorem. We will generate and plot some signals. Then, we'll perform our first kind of neural signal analysis: event-related potentials.Key concepts:- visualizing time-series- digitization/quantization- sampling- (more) indexing arrays- epoching- event-related potentials (ERPs): noise and averaging**Answers for questions requiring written responses can be entered in the cell immediately below the question, so that when you write your response, it doesn't screw up the formatting of the question.** Analog signalsReal world signals are continuous in time and amplitude (up to quantum-level limits, anyway). These are referred to as **"analog"** signals (Google it). Soundwaves that we produce when we speak or when we play a violin, for example, are analog signals. Equivalently, there are "analog devices" that produce, receive, and/or operate on analog signals. These often involve "analog" circuits. [1] Q1:[1] 1.1: Give 3 examples of analog devices. **Response for 1.1:** Digital signalsPeople used to analyze signals using analog circuits. This is pretty hardcore, and requires extensive hands-on knowledge about circuitry. Once you want to analyze the signal on a "digital" computer, however, you have to "digitize" the signal. This requires an **"analog-to-digital converter"** or ADC for short. ---A tangent (without delving too much into how a computer works): all modern computers operate with binary transistors, which use a combination of "bits" to represent all other types of information. In the analog world, there are an infinite number of number between 0 and 1, so there is a limit to how accurately we can represent small decimals (or really big numbers). Python uses [floating point](https://0.30000000000000004.com/). Everything you see on your screen, at the lowest level, is converted into a numerical **binary** representation, even strings (see [ASCII](https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html) table, for example).---Anyway, to digitize an analog signal, you have to discretely sample, both in value (voltage, brightness, etc) and in time. The former is usually called **digitization or quantization**, while **sampling** usually refers to the latter. It's like drawing a grid over your continuous signals and interpolating its values only at where the grid crosses. Let's get into itWithout further ado: let's load up some EEG signals and explore. But first, make the necessary python module imports.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import io # this submodule lets us load the signal we want
%matplotlib inline
# scipy loads .mat file into a dictionary
# the details are not crucial, we just have to unpack them into python variables
EEG_data = io.loadmat('data/EEG_exp.mat', squeeze_me = True)
# print all the variables that exist in the dictionary
print(EEG_data.keys())
# this contains the EEG data
EEG = EEG_data['EEG']
# this contains the sampling rate, in Hz (or samples/second)
fs = EEG_data['fs']
# let's plot the signal
plt.figure(figsize=(15,3))
plt.plot(EEG)
# ALWAYS label your plot axes in this course (and ever)
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
# now let's zoom in to see more detail
plt.figure(figsize=(15,3))
plt.plot(EEG, '.') # '.' means plot the data points as individual dots without linking them
plt.xlim([0,1000]) # this limits the x-axis shown
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
_____no_output_____
###Markdown
[3] Q2: DigitizationAs you can see above, the signal we loaded is already a digitally sampled time series (a little over 70,000 samples), represented by discrete points in the second plot. To study the effect of quantization, let's simulate what would happen if we further quantized the signal, with a (prehistoric) 4-bit ADC.[1] 2.1: How many possible values can a 4-bit ADC represent? Remember, this means that the ADC has 4 binary 'bits' that it can use, thus giving you a total of how many levels? Compute this number in code and store that value in the variable `num_levels` below.[1] 2.2: Let's say our ADC has a total range between -32uV to 32uV. What is the voltage resolution of our ADC then? In other words, what is the finest voltage difference our ADC can distinguish between two samples? Compute this number in code and store that value in the variable `delta_v` below.[1] 2.3: Run the next two cells, they should produce a graph where the orange trace looks very quantized (kind of square). This is not good, because then we cannot distinguish small fluctuations in our signals, which, as we will see later in the course, are very important. **Re-run** the next two cells, but experiment with different values for `num_bits`. Just based on visual inspection of the plot, what is the minimum number of bits that you would want your ADC to have in this case, assuming the blue trace is a faithful representation of your signal? There's no one right answer, but justify your response. **Response for 2.3:**
###Code
num_bits = 4
min_v, max_v = -32,32
num_levels = # _FILL_IN_YOUR_CODE_HERE
delta_v = # _FILL_IN_YOUR_CODE_HERE
# create the quantization vector, these are the new possible values that your signal can take
ADC_levels = np.arange(min_v,max_v,delta_v)+delta_v/2
# quantize the EEG signal with our crappy ADC with the function np.digitize
# note that we have to scale the redigitized signal to its original units
EEG_quant = np.digitize(EEG,bins=ADC_levels)*delta_v+min_v
plt.figure(figsize=(15,4))
plt.plot(EEG, label='Original EEG')
plt.plot(EEG_quant, label='Quantized EEG', alpha=0.8)
plt.xlim([0,1000]); plt.ylim([-15, 15]);
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Voltage (uV)')
###Output
_____no_output_____
###Markdown
--- Sample Number vs. TimeNotice that in all the plots above, the x-axis is "sample number", which simply correponds to the position each value is in the array `EEG`. We want to create a corresponding time vector, which marks at what clock time each value is sampled at. Sometimes your data will include a time vector. But for the sake of this exercise, you are asked to create the time vector based on the information/variables you have. [6] Q3: Sampling in Time[1] 3.1: Given the sampling rate, what is the sampling **period**? In other words, how much time elapses between each consecutive sample? Compute this number as a function of `fs` and store it in the variable `dt` below.[1] 3.2: How long in total is this signal, in absolute time? Compute and store this in the variable `T_exp` below.[1] 3.3: Construct the corresponding time vector for the EEG data, assuming that the first sample came at t=0 and evenly spaced samples at `dt`. Store that in the variable `t_EEG` below. Hint: check out the function `np.arange()`.[2] 3.4: Re-plot the signal as a line chart, but with the x-axis as time (using the time vector you created above), and zoom into the first 1 second of the data. **Take note to label your plots carefully, with units!**[1] 3.5: To simulate **downsampling** in time, plot every **10th** value of the EEG data by indexing the array (check Google/StackExchange for how to do this). Remember, this applies both to the time vector and your EEG data. **Make sure to label your data and display the legend as Q2 above.**[BONUS: 1] 3.6: Sometimes it's useful to downsample your signal in time to conserve memory. As we did above, by taking every 10th value in our data, we essentially reduce the data size 10-fold. However, this is **NOT** the entirely right way to downsample your data. What issue do we introduce when we simply do that? (Hint: the answer can be as short as one word, and Google is your friend here.) **Response for 3.6:**
###Code
dt = # _FILL_IN_YOUR_CODE_HERE
T_exp = # _FILL_IN_YOUR_CODE_HERE
t_EEG = # _FILL_IN_YOUR_CODE_HERE
# Plotting the signal and its downsampled version
plt.figure(figsize=(15,3))
plt.plot(t_EEG, EEG, label='EEG')
plt.plot(_FILL_IN_YOUR_CODE_HERE, _FILL_IN_YOUR_CODE_HERE, '.-', label=_FILL_IN_YOUR_CODE_HERE)
plt.xlim([0,1]); plt.ylim([-15, 15]);
plt.legend()
# _FILL_IN_YOUR_CODE_HERE
###Output
_____no_output_____
###Markdown
Event-Related AnalysisThe above data actually comes from an event-style EEG experiment. The participant is shown visual stimuli at regular intervals, aimed to trigger a reliable brain response for each type of stimuli (cat vs. dog pics, for example). This is a very common type of study design in neuroscience (and psychology). In this case, we will need to know when a stimulus was presented, and what type of stimulus it was. This information is stored in the variable `trial_info`, where the **first column has the stimulus onset time (in seconds), and the second column has the type of stimulus shown (1,2, or 3).** These are often extra streams of data sent through the "trigger channel" by the stimulus-presenting computer directly to the recording equipment, in order to synchronize with the EEG data.
###Code
trial_info = EEG_data['trial_info']
# print the first 10 events
print(trial_info[:10,:])
###Output
_____no_output_____
###Markdown
--- Process for Analyzing Event-Related DataThese types of experiments follow a pretty standard analysis process. 1. Import and pre-process your data (already done; we'll skip the pre-processing for now)2. Given the stimulus presentation timestamps (first column of `trial_info` above), find the corresponding indices in your EEG data by matching to the `t_EEG` time vector.3. Cut out an **epoch** (window of data) around the stimulus presentation time, which usually includes: - pre-stimulus baseline (~0.5 seconds before stimulus presentation) - stimulus presentation (t = 0) - stimulus-driven response (or event-related response, 0-1 second after stimulus presentation)4. Baseline subtraction: subtract each epoch by its mean pre-stimulus value to account for any slow drifts over time.5. Group epochs based on stimulus type, and average epochs of the same type.6. Plot the average response (s). [4] Q4: Step 2 - Find Matching Timestamps in EEG DataGiven the event times in `trial_info`, which we will assume to be the stimulus onset time for this experiment, we have to find the corresponding timestamp in the EEG data. Note that the timestamps may not always match exactly, as they could have different sampling rates. In those cases, you will have to settle for finding the **closest** timestamps. Currently, however, life was made easy for us by virtue of the fact that the EEG data (and timestamps) and the stimulus event timestamps are synchronously sampled at 1000Hz.In this case, we can directly convert the event timestamp into an integer index, since we know the sampling frequency and starting time. [1] 4.1: If the EEG timestamp starts at `t=0`, which is indexed by `i=0`, and is sampled at `fs=1000`, at which index will the EEG timestamp be equal to **3.050 seconds**? Compute and store this in the variable `trial_index` below. Note that to index an array, the number has to be an integer, which I've converted for you. (You will notice that the value is *a LITTLE* off. That's a precision issue and We can ignore that for now.)[3] 4.2: Following this logic, write a function that will find the corresponding index in the EEG data/timestamp for every event timestamp. Return that as an array of integers (`my_arr.astype(int)` will convert an array to all integers). You may use a for loop, list comprehension, or a simple (one-line) array calculation for this. Confirm that the timestamps match what you expect by printing the first 10 events (I've done this for you).
###Code
trial_index = #_FILL_IN_YOUR_CODE_HERE
print(t_EEG[np.array(trial_index).astype(int)]) # access the value at the corresponding index
def compute_EEG_indices(event_timestamps, fs):
# _FILL_IN_YOUR_CODE_HERE
return
# call your function to compute the corresponding indices
EEG_indices = compute_EEG_indices()
# print your solution and the actual event times to compare, they should be identical
print(t_EEG[EEG_indices[:10]])
print(trial_info[:10,0])
###Output
_____no_output_____
###Markdown
[6] Q5: Step 3 - Grabbing EpochsNow that we have the corresponding indices in the EEG data, we know exactly where the **onset** of each stimulus is. The next thing we have to do is to grab a chunk of data surrounding the onset time, which we define to be `t=0` for every trial. That means you will want to grab a little bit of data before and after that time. [3] 5.1: Write a function that will, given an array of `data`, the sampling rate `fs`, and an `index`, grab a window of data surrounding that index, defined by `len_pre` and `len_post` in **seconds**. Note that `len_pre` should be negative to reflect that it's before the stimulus onset time. I've started this function for you below. Again, there are multiple ways to accomplish this, but the simplest solution can accomplish this in a single line.[1] 5.2: Use this function to grab an epoch for the **10th trial** (remember that's stored in `EEG_indices` already), with a pre-stimulus window of 0.5 seconds and a post-stimulus window of 1 second.[1] 5.3: Create a time vector `t_epoch` that corresponds to the timestamps for that epoch, relative to the stimulus onset time as zero. In other words, this time vector should start at `len_pre` and end at `len_post`, and has the same sampling frequency.[1] 5.4: Plot the epoch of data you grabbed. Note that the x-axis should be time. **Label your axes!**
###Code
def grab_epoch(data, index, fs, len_pre, len_post):
# _FILL_IN_YOUR_CODE_HERE
return
# _FILL_IN_YOUR_CODE_HERE
len_pre = -0.5 #second
len_post = 1 #second
epoch = grab_epoch(_FILL_IN_YOUR_CODE_HERE)
print(epoch[:5])
t_epoch = # _FILL_IN_YOUR_CODE_HERE
# plotting
plt.figure(figsize=(6,4))
# _FILL_IN_YOUR_CODE_HERE
###Output
_____no_output_____
###Markdown
[4] Q6: Step 4 - Grab All & Baseline Correct (Bonus)[2] 6.1: If you grab an epoch for every trial and store that in a 2D numpy matrix, what should the dimensions of that matrix be, i.e., how many rows and how many columns? What do those numbers correspond to? Hint: you should organize your data such that there are more columns than rows in this particular case.[2] 6.2: Write a function that grabs **all** epochs (every trial) and store that in a 2D numpy matrix. There are a few ways to do this, but they will likely all use `grab_epoch()` somehow. Confirm that it has the same shape that you expect from above. Hint: you can append your epochs indefinitely to a python list using `list.append()`, and use `np.array()` to automatically convert that into a 2D matrix.[BONUS: 2] 6.3: Baseline all your epochs by subtracting the pre-stimulus epoch mean (-0.5 to 0 seconds) of each epoch from itself. **Response for 6.1:**
###Code
def get_all_epochs(data, indices, fs, len_pre, len_post):
# _FILL_IN_YOUR_CODE_HERE
# get all epochs
# baselining (if you want, it can also be a separate function)
return all_epochs
epoched_EEG = get_all_epochs(EEG, EEG_indices, fs, len_pre, len_post)
print(epoched_EEG.shape)
# plot all the epochs and average
plt.plot(t_epoch, epoched_EEG.T, '-k', alpha=0.01)
plt.plot(t_epoch, np.mean(epoched_EEG,axis=0), label='Average Response')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (uV)')
plt.legend()
###Output
_____no_output_____
###Markdown
[6] Q7: Step 5 & 6 - Group Based on Trial TypeIn the plot above, I simply averaged over all the epochs to produce the average response (blue). However, as you will recall, there are several different types of trials (second column in `trial_info`). We should group epochs of the same trial type, and average over those. [5] 7.1: You have full flexibility for this part, with the only requirement being to produce a plot with 3 average responses corresponding to the 3 different trial types. Remember to label your plot axes and include a legend for which trace corresponds to which stimulus type. You will be evaluated on 3 things: whether you have successfully separated the epochs into their respective groupings, how well your code is commented to explain what you're doing, and whether you plot is correct and labeled. Since I have not given you a template for making a function, it may be useful to plan out what you want to do beforehand by writing pseudo code (i.e., plain English). Decide what strategy you will take (loops vs. list comprehension vs. others), and whether you want to separate the averaging and the plotting. You already know all the concepts required to tackle this problem (indexing, averaging, plotting), the challenge is putting them together. [1] 7.2: Briefly describe your results, e.g., what's similar and what's different between the conditions? Which stimulus produced the largest response.---Your plot should look something like: **Response for 7.2:** ANSWER 14 & -14
###Code
# _FILL_IN_YOUR_CODE_HERE
###Output
_____no_output_____ |
notebooks/datasets/data/schools/school_cleaned.ipynb | ###Markdown
Clean Schools.csv1. Split address column2. Look at length - this displays discrepancies in addresses (looking for lengths 1, 3, 4)3. Create a city and state column - consistency with other data4. Create columns for schools categories - pk, k, elementary, middle, and high school5. Make score column an int
###Code
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', 10)
df = pd.read_csv('files/scrape_schools/schools.csv')
print(df.shape)
df.head()
###Output
(58782, 10)
###Markdown
Clean Addresses
###Code
df['Address'] = df['Address'].str.replace('2100 Morse Road, Suite 4609, Columbus, OH 43229, Columbus, OH, 43211', '2100 Morse Road, Suite 4609, Columbus, OH, 43229')
df['Address'] = df['Address'].str.replace('2501 Syracuse Street, Denver, Colorado, 80238, Denver, CO, 80238', '2501 Syracuse Street, Denver, CO, 80238')
df['Address'] = df['Address'].str.replace('4450 West Eau Gallie Boulevard, Suite 180, Melbourne, FL 32934, Melbourne, FL, 32934', '4450 West Eau Gallie Boulevard, Suite 180, Melbourne, FL, 32934')
df['Address'] = df['Address'].str.replace('4530 MacArthur Blvd, NW, Washington, DC, Washington, DC, 20007', '4530 MacArthur Blvd NW, Washington, DC, 20007')
df['Address'] = df['Address'].str.replace('1075 New Scotland Road, Albany NY 12208, Albany, NY, 12208', '1075 New Scotland Road, Albany NY, 12208')
df['Address'] = df['Address'].str.replace('216 Shelburne Road Asheville, NC 28806, Asheville, NC, 28806', '216 Shelburne Road, Asheville, NC, 28806')
df['Address'] = df['Address'].str.replace('26450 RR 12 Dripping Springs, TX 78620, Austin, TX, 78736', '26450 RR 12, Dripping Springs, TX, 78620')
df['Address'] = df['Address'].str.replace('NE Stoneridge Loop, Prineville OR 97754, Bend, OR, 97702', 'NE Stoneridge Loop, Prineville, OR, 97754')
df['Address'] = df['Address'].str.replace('5225 - Seventy seven Center Dr, Charlotte NC 28217, Charlotte, NC, 28217', '5225 77 Center Dr, Charlotte, NC, 28217')
df['Address'] = df['Address'].str.replace('3375 W. 99th Street Cleveland, OH 44102, Cleveland, OH, 44111', '3375 W. 99th Street, Cleveland, OH, 44102')
df['Address'] = df['Address'].str.replace('21 Broadmoor Avenue Colorado Springs, CO 80906, Colorado Springs, CO, 80906', '21 Broadmoor Avenue, Colorado Springs, CO, 80906')
df['Address'] = df['Address'].str.replace('220 Stoneridge Drive Suite 403 Columbia, SC 29210 , Columbia, SC, 29210', '220 Stoneridge Drive, Suite 403, Columbia, SC, 29210')
df['Address'] = df['Address'].str.replace('2247 South Ridgewood South Daytona, Florida 32119, Daytona Beach, FL, 32117', '2247 South Ridgewood, South Daytona, FL, 32119')
df['Address'] = df['Address'].str.replace('7005 Woodbine Ave Sacramento, Ca. 95822, Fairfield, CA, 94534', '7005 Woodbine Ave, Sacramento, CA, 95822')
df['Address'] = df['Address'].str.replace('4424 Innovation Drive Fort Collins, Colorado 80525, Fort Collins, CO, 80525', '4424 Innovation Drive, Fort Collins, CO, 80525')
df['Address'] = df['Address'].str.replace('5300 El Camino Road Las Vegas, NV 89118, Henderson, NV, 89014', '5300 El Camino Road, Las Vegas, NV, 89118')
df['Address'] = df['Address'].str.replace('9039 Beach Blvd Jacksonville, FL 32216, Jacksonville, FL, 32207', '9039 Beach Blvd, Jacksonville, FL, 32216')
df['Address'] = df['Address'].str.replace('390 New Holland Pike, Lancaster PA 17601, Lancaster, PA, 17601', '390 New Holland Pike, Lancaster, PA, 17601')
df['Address'] = df['Address'].str.replace('4801. S. Sandhill Drive LV, NV 89121, Las Vegas, NV, 89123', '4801. S. Sandhill Drive, Las Vegas, NV, 89123')
df['Address'] = df['Address'].str.replace('2727 Stinson Blvd. NE Minneapolis, MN 55418, Minneapolis, MN, 55418', '2727 Stinson Blvd. NE, Minneapolis, MN, 55418')
df['Address'] = df['Address'].str.replace('3000 53rd St SW Naples, FL 34116, Naples, FL, 34116', '3000 53rd St SW, Naples, FL, 34116')
df['Address'] = df['Address'].str.replace('177 W Klein Rd. New Braunfels, TX 78130, New Braunfels, TX, 78130', '177 W Klein Rd., New Braunfels, TX, 78130')
df['Address'] = df['Address'].str.replace('500 Soraparu St. New Orleans, La 70130, New Orleans, LA, 70130', '500 Soraparu St., New Orleans, LA, 70130')
df['Address'] = df['Address'].str.replace('2162 Mountain Blvd, Oakland CA 94611, Oakland, CA, 94605', '2162 Mountain Blvd, Oakland, CA, 94611')
df['Address'] = df['Address'].str.replace('13231 N. 22nd St. Phoenix, AZ 85022, Phoenix, AZ, 85028', '13231 N. 22nd St., Phoenix, AZ, 85022')
df['Address'] = df['Address'].str.replace('14124 SE Mill St, Portland OR 97233, Portland, OR, 97266', '14124 SE Mill St, Portland, OR, 97233')
df['Address'] = df['Address'].str.replace('555 Double Eagle Ct. Suite 2000 Reno, NV 89521 , Reno, NV, 89521', '555 Double Eagle Ct., Suite 2000, Reno, NV, 89521')
df['Address'] = df['Address'].str.replace('3422 Rustin Ave Riverside, CA 92507, Riverside, CA, 92504', '3422 Rustin Ave, Riverside, CA, 92507')
df['Address'] = df['Address'].str.replace('2800 19th Stree NW Rochester, MN 55901, Rochester, MN, 55902', '2800 19th Stree NW, Rochester, MN, 55901')
df['Address'] = df['Address'].str.replace('9510 Carmel Mountain Road, San Diego CA 92129, San Diego, CA, 92129', '9510 Carmel Mountain Road, San Diego CA, 92129')
df['Address'] = df['Address'].str.replace('6540 Flanders Drive. San Diego, CA 92121, San Diego, CA, 92127', '6540 Flanders Drive., San Diego, CA, 92121')
df['Address'] = df['Address'].str.replace('725 Washington St. Santa Clara, Ca 95050, Santa Clara, CA, 95050', '725 Washington St., Santa Clara, CA, 95050')
df['Address'] = df['Address'].str.replace('6715 S Boe Lane Sioux Falls, SD 57108, Sioux Falls, SD, 57105', '6715 S Boe Lane, Sioux Falls, SD, 57108')
df['Address'] = df['Address'].str.replace('12611 N. Wilson St. Mead, WA 99021, Spokane, WA, 99218', '12611 N. Wilson St., Mead, WA, 99021')
df['Address'] = df['Address'].str.replace('1450 Newfield Avenue Stamford, CT 06905, Stamford, CT, 06905', '1450 Newfield Avenue, Stamford, CT, 06905')
df['Address'] = df['Address'].str.replace('2525 Gold Brook Dr Stockton, CA 95212, Stockton, CA, 95212', '2525 Gold Brook Dr, Stockton, CA, 95212')
df['Address'] = df['Address'].str.replace('1112 North G Street | Tacoma, WA 98403, Tacoma, WA, 98403', '1112 North G Street, Tacoma, WA, 98403')
df['Address'] = df['Address'].str.replace('1250 Erbes Rd. Thousand Oaks, CA 91362, Thousand Oaks, CA, 91302', '1250 Erbes Rd., Thousand Oaks, CA, 91362')
df['Address'] = df['Address'].str.replace('3201 N. Eastman Rd. Longview, TX 75605, Tyler, TX, 75799', '3201 N. Eastman Rd., Longview, TX, 75605')
df['Address'] = df['Address'].str.replace('St. Catherine of Siena School, 3460 Tennessee Street, Vallejo, CA, 94591', '3460 Tennessee Street, Vallejo, CA, 94591')
df['Address'] = df['Address'].str.replace('1650 Godfrey Ave. Wyoming,Mi 49509 , Wyoming, MI, 49509', '1650 Godfrey Ave., Wyoming, MI, 49509' )
df['Address'] = df['Address'].str.replace('3422 Rustin Ave Riverside, CA 92507', '3422 Rustin Ave, Riverside, CA, 92507')
df['Address'] = df['Address'].str.replace('San Martin De Porres Clinic: Kallumadanda Vinnie MD Mission, TX 78572', 'San Martin De Porres Clinic: Kallumadanda Vinnie MD, Mission, TX, 78572') # 33396
df['Address'] = df['Address'].str.replace('Rockwood Plastic Surgery Center: Gardner Glenn P MD Spokane, WA 99204', 'Rockwood Plastic Surgery Center: Gardner Glenn P MD, Spokane, WA, 99204' ) # 50841
df['Address'] = df['Address'].str.replace('2950 East 29th Street, Long Beach, CA', '2950 E 29th St, Long Beach, CA, 90806')
df['Address'] = df['Address'].str.replace('2585 Business Park Drive, Vista, 92081', '2585 Business Park Dr, Vista, CA, 92081')
df['Address'] = df['Address'].str.replace('401 E Arrowood Rd, Charlotte, Nc', '401 E Arrowood Rd, Charlotte, NC, 28217')
df['Address'] = df['Address'].str.replace('2900 Barberry Avenue, Columbia, Missouri 65202', '2900 Barberry Avenue, Columbia, MO, 65202')
df['Address'] = df['Address'].str.replace('2572 John F Kennedy Boulevard, Jersey City, New Jersey 07304', '2572 John F Kennedy Boulevard, Jersey City, NJ, 07304')
df['Address'] = df['Address'].str.replace('4656 N. Rancho Drive, Las Vegas, Nevada 89130', '4656 N. Rancho Drive, Las Vegas, NV, 89130')
df['Address'] = df['Address'].str.replace('6415 SE Morrison street, Portland, Oregon 97215', '6415 SE Morrison Street, Portland, OR, 97215')
df['Address'] = df['Address'].str.replace('2120 21st Avenue South, Seattle, Washington 98144', '2120 21st Avenue South, Seattle, WA, 98144')
df['Address'] = df['Address'].str.replace('4025 N. Hartford Ave., Tulsa, OK. 74106', '4025 N. Hartford Ave., Tulsa, OK, 74106')
df['Address'] = df['Address'].str.replace('6355 Willowbrook St., Wichita, Ks 67208', '6355 Willowbrook St., Wichita, KS, 67208')
df['Address'] = df['Address'].str.replace('4314 clarno dr, austin, TX 78749', '4314 Clarno Dr, Austin, TX 78749')
df['Address'] = df['Address'].str.replace('Suite 117', 'Suite 117,')
# specific
df.at[52126, 'Address'] = '1112 North G Street, Tacoma, WA, 98403'
df.at[46311, 'Address'] = '5531 Cancha de Golf Ste 202, Rancho Santa Fe, CA, 92091'
df.at[56607, 'Address'] = '4880 MacArthur Blvd. NW, Washington, DC, 20007'
df.at[27205, 'Address'] = '1018 Harding Street, Suite 112, Lafayette, LA, 70503'
df.at[50525, 'Address'] = '8740 Asheville Hwy, Spartanburg, SC, 29316'
df.at[397, 'Address'] = '1075 New Scotland Road, Albany, NY, 12208'
df.at[8207, 'Address'] = '3500 Cleveland Avenue NW, Canton, OH, 44709'
df.at[8292, 'Address'] = '231 Del Prado Blvd. S, Cape Coral, FL, 33990'
df.at[11542, 'Address'] = '1320 South Fairview Road, Columbia MO, 65203'
df.at[18372, 'Address'] = '7005 Woodbine Ave, Sacramento, CA, 95822'
df.at[19249, 'Address'] = '4424 Innovation Drive, Fort Collins, CO, 80525'
df.at[21626, 'Address'] = '1130 Eliza St.,, Green Bay, WI, 54301'
df.at[38985, 'Address'] = '2211 Saint Andrews Blvd., Panama City FL, 32405'
df.at[42682, 'Address'] = '5510 Munford Road, Raleigh NC, 27612'
df.at[46031, 'Address'] = '2850 Logan Ave, San Diego, CA, 92113'
df.at[46285, 'Address'] = '9510 Carmel Mountain Road, San Diego, CA, 92129'
df.at[54169, 'Address'] = '3535 West Messala Way, Tucson, AZ, 85746'
df.at[56231, 'Address'] = '2200 Minnesota Av. SE Washington DC, 20020'
df.at[56603, 'Address'] = '3328 Martin Luther King Junior Avenue Southeast, Washington DC, 20032'
df.at[10584, 'Address'] = '3375 W. 99th Street, Cleveland, OH, 44102'
df.at[11668, 'Address'] = '220 Stoneridge Drive, Suite 403, Columbia, SC, 29210'
df.at[23334, 'Address'] = '5300 El Camino Road, Las Vegas, NV, 89118'
df.at[34536, 'Address'] = '3000 53rd St SW, Naples, FL, 34116'
df.at[36778, 'Address'] = '2162 Mountain Blvd, Oakland, CA, 94611'
df.at[41320, 'Address'] = '6415 SE Morrison Street, Portland, OR, 97215'
df.at[42400, 'Address'] = '555 Double Eagle Ct., Suite 2000, Reno, NV, 89521'
df.at[49117, 'Address'] = '12351 8th Ave NE, Seattle, WA, 98125'
df.at[49183 , 'Address'] = '2120 21st Avenue South, Seattle, WA, 98144'
df.at[56231, 'Address'] = '2200 Minnesota Av. SE, Washington, DC, 20020'
df.at[11542, 'Address'] = '1320 South Fairview Road, Columbia, MO, 65203'
df.at[38985, 'Address'] = '2211 Saint Andrews Blvd., Panama City, FL, 32405'
df.at[42682, 'Address'] = '5510 Munford Road, Raleigh, NC, 27612'
df.at[56603, 'Address'] = '3328 Martin Luther King Junior Avenue Southeast, Washington, DC, 20032'
df['Address'] = df['Address'].str.replace('Washington, DC, Washington, DC,', 'Washington, DC,')
df['Address'] = df['Address'].str.replace('New Orleans, LA, New Orleans, LA,', 'New Orleans, LA,')
df['Address'] = df['Address'].str.replace('Albuquerque, NM, Albuquerque, NM,', 'Albuquerque, NM,' )
df['Address'] = df['Address'].str.replace('Chelsea, MA, Boston, MA,', 'Chelsea, MA,' )
df['Address'] = df['Address'].str.replace('Franklin, TN, Franklin, TN,', 'Franklin, TN,')
df['Address'] = df['Address'].str.replace('Hales Corners, WI, Milwaukee, WI', 'Hales Corners, WI,') # 50525
df['Address'] = df['Address'].str.replace('Albany NY', 'Albany, NY,' )
df['Address'] = df['Address'].str.replace('Prineville OR', 'Prineville, OR,')
df['Address'] = df['Address'].str.replace('Lancaster PA', 'Lancaster, PA,')
df['Address'] = df['Address'].str.replace('Portland OR', 'Portland, OR,')
df['Address'] = df['Address'].str.replace('San Diego CA', 'San Diego, CA,')
df['Address'] = df['Address'].str.replace('austin', 'Austin')
df['Address'] = df['Address'].str.replace('milwaukee', 'Milwaukee')
df['Address'] = df['Address'].str.replace('greeley', 'Greeley')
df['Address'] = df['Address'].str.replace('Oklahoma city', 'Oklahoma City')
df['Address'] = df['Address'].str.replace('CARMEL', 'Carmel')
df['Address'] = df['Address'].str.replace('COLORADO SPRINGS', 'Colorado Springs')
df['Address'] = df['Address'].str.replace('GREENSBORO', 'Greensboro')
df['Address'] = df['Address'].str.replace('SAN DIEGO', 'San Diego')
df['Address'] = df['Address'].str.replace('Cherry Hill/Baltimore', 'Cherry Hill')
df['Address'] = df['Address'].str.replace('AL ', 'AL, ')
df['Address'] = df['Address'].str.replace('AK ', 'AK, ')
df['Address'] = df['Address'].str.replace('AR ', 'AR, ')
df['Address'] = df['Address'].str.replace('AZ ', 'AZ, ')
df['Address'] = df['Address'].str.replace('CA ', 'CA, ')
df['Address'] = df['Address'].str.replace('CO ', 'CO, ')
df['Address'] = df['Address'].str.replace('CT ', 'CT, ')
df['Address'] = df['Address'].str.replace('DE ', 'DE, ')
df['Address'] = df['Address'].str.replace('DC ', 'DC, ')
df['Address'] = df['Address'].str.replace('FL ', 'FL, ')
df['Address'] = df['Address'].str.replace('GA ', 'GA, ')
df['Address'] = df['Address'].str.replace('HI ', 'HI, ')
df['Address'] = df['Address'].str.replace('IA ', 'IA, ')
df['Address'] = df['Address'].str.replace('ID ', 'ID, ')
df['Address'] = df['Address'].str.replace('IL ', 'IL, ')
df['Address'] = df['Address'].str.replace('IN ', 'IN, ')
df['Address'] = df['Address'].str.replace('KS ', 'KS, ')
df['Address'] = df['Address'].str.replace('KY ', 'KY, ')
df['Address'] = df['Address'].str.replace('LA ', 'LA, ')
df['Address'] = df['Address'].str.replace('MA ', 'MA, ')
df['Address'] = df['Address'].str.replace('MD ', 'MD, ')
df['Address'] = df['Address'].str.replace('ME ', 'ME, ')
df['Address'] = df['Address'].str.replace('MI ', 'MI, ')
df['Address'] = df['Address'].str.replace('MN ', 'MN, ')
df['Address'] = df['Address'].str.replace('MO ', 'MO, ')
df['Address'] = df['Address'].str.replace('MS ', 'MS, ')
df['Address'] = df['Address'].str.replace('MT ', 'MT, ')
df['Address'] = df['Address'].str.replace('NC ', 'NC, ')
df['Address'] = df['Address'].str.replace('ND ', 'ND, ')
df['Address'] = df['Address'].str.replace('NH ', 'NH, ')
df['Address'] = df['Address'].str.replace('NJ ', 'NJ, ')
df['Address'] = df['Address'].str.replace('NM ', 'NM, ')
df['Address'] = df['Address'].str.replace('NV ', 'NV, ')
df['Address'] = df['Address'].str.replace('NY ', 'NY, ')
df['Address'] = df['Address'].str.replace('OH ', 'OH, ')
df['Address'] = df['Address'].str.replace('OK ', 'OK, ')
df['Address'] = df['Address'].str.replace('OR ', 'OR, ')
df['Address'] = df['Address'].str.replace('PA ', 'PA, ')
df['Address'] = df['Address'].str.replace('RI ', 'RI, ')
df['Address'] = df['Address'].str.replace('SC ', 'SC, ')
df['Address'] = df['Address'].str.replace('SD ', 'SD, ')
df['Address'] = df['Address'].str.replace('TN ', 'TN, ')
df['Address'] = df['Address'].str.replace('TX ', 'TX, ')
df['Address'] = df['Address'].str.replace('UT ', 'UT, ')
df['Address'] = df['Address'].str.replace('VA ', 'VA, ')
df['Address'] = df['Address'].str.replace('VT ', 'VT, ')
df['Address'] = df['Address'].str.replace('WA ', 'WA, ')
df['Address'] = df['Address'].str.replace('WI ', 'WI, ')
df['Address'] = df['Address'].str.replace('WV ', 'WV, ')
df['Address'] = df['Address'].str.replace('WY ', 'WY, ')
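# Hedged alternative (sketch, shown on an example Series rather than applied to df): a single
# regex that only inserts a comma when a two-letter state code is immediately followed by a
# 5-digit ZIP avoids the accidental matches a plain 'XX ' -> 'XX, ' replacement can make.
example = pd.Series(['4314 Clarno Dr, Austin, TX 78749'])
print(example.str.replace(r'\b([A-Z]{2}) (\d{5})\b', r'\1, \2', regex=True)[0])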
###Output
_____no_output_____
###Markdown
Split Address Column- look for more discrepancies Create lengths to find discrepancies in 'Address' column
###Code
df['City, State'] = df['Address'].str.split(',')
# Finding length because there are anomalies with the information in the address column
df['Length'] = df['City, State'].apply(lambda x: len(x) if x != None else 0 )
# 4 is the expected length
df['Length'].unique()
###Output
_____no_output_____
###Markdown
Create new dataframes for different lengths Length 1
###Code
# No Address - removing from df
df = df[df['Length'] != 1]
###Output
_____no_output_____
###Markdown
Length 7- https://stackoverflow.com/questions/6266727/python-cut-off-the-last-word-of-a-sentence- https://towardsdatascience.com/a-really-simple-way-to-edit-row-by-row-in-a-pandas-dataframe-75d339cbd313
###Code
df.loc[df['Length'] == 7]
for index in df.index:
if df.loc[index, 'Length'] == 7:
content = df.loc[index, 'Address']
df.loc[index, 'Address'] = ', '.join(content.split(', ')[:-3])
###Output
_____no_output_____
###Markdown
Length 8
###Code
df.loc[df['Length'] == 8]
for index in df.index:
if df.loc[index, 'Length'] == 8:
content = df.loc[index, 'Address']
df.loc[index, 'Address'] = ', '.join(content.split(', ')[:-3])
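# Illustration of the trimming above on a hypothetical duplicated address: dropping the last
# three comma-separated pieces removes the repeated 'City, ST, ZIP' tail.
s = '123 Main St, Springfield, IL, 62704, Springfield, IL, 62704'
print(', '.join(s.split(', ')[:-3]))   # -> '123 Main St, Springfield, IL, 62704'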
###Output
_____no_output_____
###Markdown
Check
###Code
df = df.drop(columns = ['City, State', 'Length'])
df['City, State'] = df['Address'].str.split(',')
# Checking the number of address components after cleaning
df['Length'] = df['City, State'].apply(lambda x: len(x) if x != None else 0 )
df['Length'].unique()
###Output
_____no_output_____
###Markdown
Create City, State columns
###Code
df['City'] = df['City, State'].str[-3]
df['State'] = df['City, State'].str[-2]
###Output
_____no_output_____
###Markdown
Check Unique Cities
###Code
print(df['City'].nunique())
df['City'].unique()
###Output
396
###Markdown
Check Unique States
###Code
print(df['State'].nunique())
df['State'].unique()
df.loc[df['State'] == '']
df.at[32718, 'Address'] = '5425 S. 111th Street, Hales Corners, WI, 53222'
df.at[32718, 'State'] = 'WI'
df.at[32718, 'City'] = 'Hales Corners'
###Output
_____no_output_____
###Markdown
Update School Score- change to int so data can be sorted by the value
###Code
df['Score'] = df['Score'].str.replace('/10', '')
df['Score'] = df['Score'].astype(int)
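# Hedged alternative (illustrative example Series, not applied to df): pd.to_numeric with
# errors='coerce' tolerates unexpected non-numeric scores by turning them into NaN.
example_scores = pd.Series(['7/10', '10/10', 'NR'])
print(pd.to_numeric(example_scores.str.replace('/10', '', regex=False), errors='coerce'))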
###Output
_____no_output_____
###Markdown
Separating into PK, K, Elementary, Middle, High School- https://stackoverflow.com/questions/61877712/check-if-an-item-in-a-list-is-available-in-a-column-which-is-of-type-list
###Code
def parse_grades(grades_string):
GRADES = ['PK', 'K', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', 'Ungraded']
# Remove & for grades list
grades_string = grades_string.replace(' &', ',')
    # grades list - each parsed grade will be appended here
grades = []
# split strings based on ','
string_list = grades_string.split(',')
# look for sections of list with '-'
dash = "-"
for i in range(len(string_list)):
clean_string = string_list[i].strip()
if dash in clean_string:
# split using '-', loop and add to grades variable
start_grade, end_grade = clean_string.split(dash)
grades += GRADES[GRADES.index(start_grade) : GRADES.index(end_grade)+ 1]
else:
# add string to grades
grades.append(clean_string)
return grades
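# Quick illustrative checks of parse_grades (hand-verifiable against GRADES above):
# parse_grades('PK-3, 5-8')      -> ['PK', 'K', '1', '2', '3', '5', '6', '7', '8']
# parse_grades('K-8 & Ungraded') -> ['K', '1', '2', '3', '4', '5', '6', '7', '8', 'Ungraded']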
print(df['Grades'].nunique())
unique_grades_combination = df['Grades'].unique()
def test_complete_dataset(unique_grades_combination):
# create a loop that goes thru dataset and invoke parse_grades with each element
separated_grades_list = []
for i in unique_grades_combination:
separated_grades_list.append(parse_grades(i))
dictionary_grade_list = dict(zip(unique_grades_combination, separated_grades_list))
return dictionary_grade_list
dictionary = test_complete_dataset(unique_grades_combination)
df['Clean_Grades'] = df['Grades'].map(dictionary)
high_school = ['9', '10', '11', '12']
middle_school = ['6', '7', '8']
elementary = ['K', '1', '2', '3', '4', '5']
pre_k = ['PK']
set1 = set(high_school)
df['High School (9-12)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set1]))
set2 = set(middle_school)
df['Middle School (6-8)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set2]))
set3 = set(elementary)
df['Elementary (K-5)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set3]))
set4 = set(pre_k)
df['Pre-Kindergarten (PK)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set4]))
df[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] = df[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] * 1
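# The flags above are just membership tests cast to 0/1, e.g.:
# any([k in ['PK', 'K', '1', '2'] for k in set1])  -> False  (no high-school grade present)
# any([k in ['6', '7', '8', '9']  for k in set1])  -> True   (contains '9')
# and multiplying the boolean columns by 1 converts True/False to 1/0.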
df['Grades'] = df['Grades'].str.replace(' & Ungraded', '')
###Output
_____no_output_____
###Markdown
Check that parse grades and categorizing schools worked
###Code
unique_grades_combination
df.loc[df['Grades'] == 'PK-6']
df.loc[df['Grades'] == 'K-1, 4, 7-9, 11']
df.loc[df['Grades'] == 'PK, 3-6, 8, 10, 12']
###Output
_____no_output_____
###Markdown
Filling in NaNs
###Code
df.isnull().sum()
df[['Total Students Enrolled', 'Students per teacher']] = df[['Total Students Enrolled', 'Students per teacher']].fillna(0)
df['District'] = df['District'].fillna('Unavailable')
df.isnull().sum()
df.loc[df['Students per teacher'] == 'NaN']
df.loc[df['District'] == 'NaN']
df.loc[df['Total Students Enrolled'] == 'NaN']
###Output
_____no_output_____
###Markdown
Cleaning Extra Spaces - drop unnecessary columns
###Code
df['City'] = df['City'].str.strip()
df['State'] = df['State'].str.strip()
df['Address'] = df['Address'].str.strip()
df['School'] = df['School'].str.strip()
df['Rating'] = df['Rating'].str.strip()
df['Address'] = df['Address'].str.strip()
df['Type'] = df['Type'].str.strip()
df['Grades'] = df['Grades'].str.strip()
df['Students per teacher'] = df['Students per teacher'].str.strip()
df['District'] = df['District'].str.strip()
# Drop
df = df.drop(columns = ['City, State', 'Length', 'Clean_Grades'])
###Output
_____no_output_____
###Markdown
Save
###Code
df.to_csv('files/merge_schools/schools_cleaned.csv', index = False)
###Output
_____no_output_____
###Markdown
Part 2- clean csv of cities that did not scrape the first time
###Code
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', 10)
missing = pd.read_csv('files/scrape_schools/missing_schools.csv')
print(missing.shape)
missing.head()
# Addresses
missing['Address'] = missing['Address'].str.replace('5115 North Mont Clare Avenue Chicago, IL 60656, Chicago, IL, 60656', '5115 North Mont Clare Avenue, Chicago, IL, 60656')
missing['Address'] = missing['Address'].str.replace('11000 Scott St, Houston, Houston, TX, 77047', '11000 Scott St, Houston, TX, 77047')
missing['Address'] = missing['Address'].str.replace('3911 Campbell Rd., Ho, Houston, TX, 77080', '3911 Campbell Rd., Houston, TX, 77080')
missing['Address'] = missing['Address'].str.replace('8805 Ferndale, Houston, Houston, TX, 77017', '8805 Ferndale, Houston, TX, 77017')
missing['Address'] = missing['Address'].str.replace('4240 E. Olympic Blvd. Los Angeles, CA 90023, Los Angeles, CA, 90063', '4240 E. Olympic Blvd., Los Angeles, CA, 90023')
missing['Address'] = missing['Address'].str.replace('1263 S Soto St Los Angeles, CA 90023 , Los Angeles, CA, 90031', '1263 S Soto St, Los Angeles, CA, 90023')
missing['Address'] = missing['Address'].str.replace('95 NW 23rd St Miami, FL 33127, Miami, FL, 33137', '95 NW 23rd St, Miami, FL, 33127')
missing['Address'] = missing['Address'].str.replace('12101 SW 34 St. MIAMI, FL. 33175, Miami, FL, 33175', '12101 SW 34 St., Miami, FL, 33175')
missing['Address'] = missing['Address'].str.replace('332 West 43rd Street, New York NY 10036, New York, NY, 10025', '332 West 43rd Street, New York, NY, 10025')
missing['Address'] = missing['Address'].str.replace('120 Wadsworth Avenue New York, N.Y. 10033, New York, NY, 10033', '120 Wadsworth Avenue, New York, NY, 10033')
missing['Address'] = missing['Address'].str.replace('5311 Merlin Dr San Antonio, Texas 78218, San Antonio, TX, 78218', '5311 Merlin Dr, San Antonio, TX, 78218')
missing['Address'] = missing['Address'].str.replace('8565 Ewing Halsell Drive San Antonio, Texas 78229, San Antonio, TX, 78229', '8565 Ewing Halsell Drive, San Antonio, TX, 78229')
missing['Address'] = missing['Address'].str.replace('4419 S Normandie Ave La, Ca 90037, Los Angeles, CA, 90007', '4419 S Normandie Ave, Los Angeles, CA, 90037')
missing['Address'] = missing['Address'].str.replace('2521 Grove Street, Blue Island, IL 60406, Chicago, IL, 60643', '2521 Grove Street, Blue Island, IL, 60406')
missing['Address'] = missing['Address'].str.replace('1913 Southwest Fwy #B, Houston, TX 77098, Houston, TX, 77030', '1913 Southwest Fwy #B, Houston, TX, 77098')
missing['Address'] = missing['Address'].str.replace('4009 Sherwood Lane, Houston, TX 77092, Houston, TX, 77092', '4009 Sherwood Lane, Houston, TX, 77092')
missing['Address'] = missing['Address'].str.replace('1600 W. Imperial Highway, Los Angeles, CA 90047, Los Angeles, CA, 90045', '1600 W. Imperial Highway, Los Angeles, CA, 90047')
missing['Address'] = missing['Address'].str.replace('131 E. 50th Street, Los Angles, CA 90011, Los Angeles, CA, 90011', '131 E. 50th Street, Los Angeles, CA, 90011')
missing['Address'] = missing['Address'].str.replace('4301 West Martin Luther King Jr. Boulevard, Los Angeles, CA 90008, Los Angeles, CA, 90016', '4301 West Martin Luther King Jr. Boulevard, Los Angeles, CA, 90008')
missing['Address'] = missing['Address'].str.replace('8515 Kansas Avenue, Los Angeles, CA 90044, Los Angeles, CA, 90047', '8515 Kansas Avenue, Los Angeles, CA, 90044')
missing['Address'] = missing['Address'].str.replace('1989 Westwood Blvd, LA, CA 90025, Los Angeles, CA, 90025', '1989 Westwood Blvd, Los Angeles, CA, 90025')
missing['Address'] = missing['Address'].str.replace('6601 NW 167th St, Hialeah, FL 33015, Miami, FL, 33015', '6601 NW 167th St, Hialeah, FL, 33015')
missing['Address'] = missing['Address'].str.replace('7412 Sunset Drive, Miami, FL 33143, Miami, FL, 33176', '7412 Sunset Drive, Miami, FL, 33143')
missing['Address'] = missing['Address'].str.replace('259 10th Avenue, New York, NY 10001, New York, NY, 10001', '259 10th Avenue, New York, NY, 10001')
missing['Address'] = missing['Address'].str.replace('2212 Third Avenue, 2nd Floor, New York, NY 10035, New York, NY, 10065', '2212 Third Avenue, 2nd Floor, New York, NY, 10035')
missing['Address'] = missing['Address'].str.replace('10126 South Western, CHICAGO, IL, 60643', '10126 South Western, Chicago, IL, 60643')
missing['Address'] = missing['Address'].str.replace('38 delancey st., new york, NY, 10002', '38 Delancey St., New York, NY, 10002')
missing['Address'] = missing['Address'].str.replace('40 Rector Street, new york, NY, 10006', '40 Rector Street, New York, NY, 10006')
# specific
missing.at[3886, 'Address'] = '4240 E. Olympic Blvd., Los Angeles, CA, 90023'
###Output
_____no_output_____
###Markdown
Split Address Column
###Code
missing['City, State'] = missing['Address'].str.split(',')
###Output
_____no_output_____
###Markdown
Create Lengths to find discrepancies in Address column
###Code
# Finding length because there are anomalies with the information in the address column
missing['Length'] = missing['City, State'].apply(lambda x: len(x) if x != None else 0 )
# 4 is the expected length
missing['Length'].unique()
###Output
_____no_output_____
###Markdown
Create City, State column
###Code
missing['City'] = missing['City, State'].str[-3]
missing['State'] = missing['City, State'].str[-2]
###Output
_____no_output_____
###Markdown
Check for Unique Cities - should be about 5 (maybe slightly more after cleaning addresses)
###Code
print(missing['City'].nunique())
missing['City'].unique()
###Output
8
###Markdown
Check for Unique States- expecting 5
###Code
print(missing['State'].nunique())
missing['State'].unique()
###Output
5
###Markdown
Update School Scores
###Code
missing['Score'] = missing['Score'].str.replace('/10', '')
missing['Score'] = missing['Score'].astype(int)
###Output
_____no_output_____
###Markdown
Separate into PK, Elementary, Middle, High School
###Code
def parse_grades(grades_string):
GRADES = ['PK', 'K', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', 'Ungraded']
# Remove & for grades list
grades_string = grades_string.replace(' &', ',')
# Grades list - will add to separated grade string to grades
grades = []
# split strings based on ','
string_list = grades_string.split(',')
# look for sections of list with '-'
dash = "-"
for i in range(len(string_list)):
clean_string = string_list[i].strip()
if dash in clean_string:
# split using '-', loop and add to grades variable
start_grade, end_grade = clean_string.split(dash)
grades += GRADES[GRADES.index(start_grade) : GRADES.index(end_grade)+ 1]
else:
# add string to grades
grades.append(clean_string)
return grades
print(missing['Grades'].nunique())
unique_grades_combination = missing['Grades'].unique()
def test_complete_dataset(unique_grades_combination):
# create a loop that goes thru dataset and invoke parse_grades with each element
separated_grades_list = []
for i in unique_grades_combination:
separated_grades_list.append(parse_grades(i))
dictionary_grade_list = dict(zip(unique_grades_combination, separated_grades_list))
return dictionary_grade_list
dictionary = test_complete_dataset(unique_grades_combination)
missing['Clean_Grades'] = missing['Grades'].map(dictionary)
# https://stackoverflow.com/questions/53350793/how-to-check-if-pandas-column-has-value-from-list-of-string
high_school = ['9', '10', '11', '12']
middle_school = ['6', '7', '8']
elementary = ['K', '1', '2', '3', '4', '5']
pre_k = ['PK']
set1 = set(high_school)
missing['High School (9-12)'] = missing['Clean_Grades'].apply(lambda x: any([k in x for k in set1]))
set2 = set(middle_school)
missing['Middle School (6-8)'] = missing['Clean_Grades'].apply(lambda x: any([k in x for k in set2]))
set3 = set(elementary)
missing['Elementary (K-5)'] = missing['Clean_Grades'].apply(lambda x: any([k in x for k in set3]))
set4 = set(pre_k)
missing['Pre-Kindergarten (PK)'] = missing['Clean_Grades'].apply(lambda x: any([k in x for k in set4]))
missing[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] = missing[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] * 1
missing['Grades'] = missing['Grades'].str.replace(' & Ungraded', '')
###Output
_____no_output_____
###Markdown
Check
###Code
missing.loc[missing['Grades'] == 'PK-3, 5-8']
###Output
_____no_output_____
###Markdown
Filling NaNs
###Code
missing.isna().sum()
missing[['Total Students Enrolled', 'Students per teacher']] = missing[['Total Students Enrolled', 'Students per teacher']].fillna(0)
missing['District'] = missing['District'].fillna('Unavailable')
missing.isna().sum()
###Output
_____no_output_____
###Markdown
Clean and Drop columns
###Code
missing['City'] = missing['City'].str.strip()
missing['State'] = missing['State'].str.strip()
missing['Address'] = missing['Address'].str.strip()
missing['School'] = missing['School'].str.strip()
missing['Rating'] = missing['Rating'].str.strip()
missing['Address'] = missing['Address'].str.strip()
missing['Type'] = missing['Type'].str.strip()
missing['Grades'] = missing['Grades'].str.strip()
missing['Students per teacher'] = missing['Students per teacher'].str.strip()
missing['District'] = missing['District'].str.strip()
# Drop
missing = missing.drop(columns = ['City, State', 'Length', 'Clean_Grades'])
missing.to_csv('files/merge_schools/missing_schools_cleaned.csv', index = False)
###Output
_____no_output_____
###Markdown
Part 3 Merge - 1. remove cities in missing_cities from df to prevent duplicates: 'Chicago', 'Blue Island', 'Houston', 'Los Angeles', 'Miami', 'Hialeah', 'New York', 'San Antonio' 2. Merge 3. Clean
###Code
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', 25)
df = pd.read_csv('files/merge_schools/schools_cleaned.csv')
missing = pd.read_csv('files/merge_schools/missing_schools_cleaned.csv')
print(df.shape)
df.head()
print(missing.shape)
missing.head()
# Removing cities from schools to prevent duplicate cities
df = df[df['City'] != 'Chicago']
df = df[df['City'] != 'Houston']
df = df[df['City'] != 'Los Angeles']
df = df[df['City'] != 'Miami']
df = df[df['City'] != 'New York']
df = df[df['City'] != 'San Antonio']
frames = [df, missing]
final = pd.concat(frames)
print(final.shape) # 58779 + 8941 - 150(dropped cities) = 67570
final.head()
final.isnull().sum()
final['Students per teacher'] = final['Students per teacher'].fillna(0)
final.isnull().sum()
final = final.drop(columns = ['Unnamed: 0'])
final.to_csv('csv/final_school.csv', index = False)
final.to_csv('../../datasets_to_merge/labs2/files/final_school.csv')
###Output
_____no_output_____
###Markdown
Clean Schools.csv - 1. Split address column 2. Look at length - this displays discrepancies in addresses (looking for lengths 1, 3, 4) 3. Create a city and state column - consistency with other data 4. Create columns for school categories - PK, K, elementary, middle, and high school 5. Make score column an int
###Code
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', 10)
df = pd.read_csv('files/scrape_schools/schools.csv')
print(df.shape)
df.head()
###Output
(58782, 10)
###Markdown
Clean Addresses
###Code
df['Address'] = df['Address'].str.replace('2100 Morse Road, Suite 4609, Columbus, OH 43229, Columbus, OH, 43211', '2100 Morse Road, Suite 4609, Columbus, OH, 43229')
df['Address'] = df['Address'].str.replace('2501 Syracuse Street, Denver, Colorado, 80238, Denver, CO, 80238', '2501 Syracuse Street, Denver, CO, 80238')
df['Address'] = df['Address'].str.replace('4450 West Eau Gallie Boulevard, Suite 180, Melbourne, FL 32934, Melbourne, FL, 32934', '4450 West Eau Gallie Boulevard, Suite 180, Melbourne, FL, 32934')
df['Address'] = df['Address'].str.replace('4530 MacArthur Blvd, NW, Washington, DC, Washington, DC, 20007', '4530 MacArthur Blvd NW, Washington, DC, 20007')
df['Address'] = df['Address'].str.replace('1075 New Scotland Road, Albany NY 12208, Albany, NY, 12208', '1075 New Scotland Road, Albany NY, 12208')
df['Address'] = df['Address'].str.replace('216 Shelburne Road Asheville, NC 28806, Asheville, NC, 28806', '216 Shelburne Road, Asheville, NC, 28806')
df['Address'] = df['Address'].str.replace('26450 RR 12 Dripping Springs, TX 78620, Austin, TX, 78736', '26450 RR 12, Dripping Springs, TX, 78620')
df['Address'] = df['Address'].str.replace('NE Stoneridge Loop, Prineville OR 97754, Bend, OR, 97702', 'NE Stoneridge Loop, Prineville, OR, 97754')
df['Address'] = df['Address'].str.replace('5225 - Seventy seven Center Dr, Charlotte NC 28217, Charlotte, NC, 28217', '5225 77 Center Dr, Charlotte, NC, 28217')
df['Address'] = df['Address'].str.replace('3375 W. 99th Street Cleveland, OH 44102, Cleveland, OH, 44111', '3375 W. 99th Street, Cleveland, OH, 44102')
df['Address'] = df['Address'].str.replace('21 Broadmoor Avenue Colorado Springs, CO 80906, Colorado Springs, CO, 80906', '21 Broadmoor Avenue, Colorado Springs, CO, 80906')
df['Address'] = df['Address'].str.replace('220 Stoneridge Drive Suite 403 Columbia, SC 29210 , Columbia, SC, 29210', '220 Stoneridge Drive, Suite 403, Columbia, SC, 29210')
df['Address'] = df['Address'].str.replace('2247 South Ridgewood South Daytona, Florida 32119, Daytona Beach, FL, 32117', '2247 South Ridgewood, South Daytona, FL, 32119')
df['Address'] = df['Address'].str.replace('7005 Woodbine Ave Sacramento, Ca. 95822, Fairfield, CA, 94534', '7005 Woodbine Ave, Sacramento, CA, 95822')
df['Address'] = df['Address'].str.replace('4424 Innovation Drive Fort Collins, Colorado 80525, Fort Collins, CO, 80525', '4424 Innovation Drive, Fort Collins, CO, 80525')
df['Address'] = df['Address'].str.replace('5300 El Camino Road Las Vegas, NV 89118, Henderson, NV, 89014', '5300 El Camino Road, Las Vegas, NV, 89118')
df['Address'] = df['Address'].str.replace('9039 Beach Blvd Jacksonville, FL 32216, Jacksonville, FL, 32207', '9039 Beach Blvd, Jacksonville, FL, 32216')
df['Address'] = df['Address'].str.replace('390 New Holland Pike, Lancaster PA 17601, Lancaster, PA, 17601', '390 New Holland Pike, Lancaster, PA, 17601')
df['Address'] = df['Address'].str.replace('4801. S. Sandhill Drive LV, NV 89121, Las Vegas, NV, 89123', '4801. S. Sandhill Drive, Las Vegas, NV, 89123')
df['Address'] = df['Address'].str.replace('2727 Stinson Blvd. NE Minneapolis, MN 55418, Minneapolis, MN, 55418', '2727 Stinson Blvd. NE, Minneapolis, MN, 55418')
df['Address'] = df['Address'].str.replace('3000 53rd St SW Naples, FL 34116, Naples, FL, 34116', '3000 53rd St SW, Naples, FL, 34116')
df['Address'] = df['Address'].str.replace('177 W Klein Rd. New Braunfels, TX 78130, New Braunfels, TX, 78130', '177 W Klein Rd., New Braunfels, TX, 78130')
df['Address'] = df['Address'].str.replace('500 Soraparu St. New Orleans, La 70130, New Orleans, LA, 70130', '500 Soraparu St., New Orleans, LA, 70130')
df['Address'] = df['Address'].str.replace('2162 Mountain Blvd, Oakland CA 94611, Oakland, CA, 94605', '2162 Mountain Blvd, Oakland, CA, 94611')
df['Address'] = df['Address'].str.replace('13231 N. 22nd St. Phoenix, AZ 85022, Phoenix, AZ, 85028', '13231 N. 22nd St., Phoenix, AZ, 85022')
df['Address'] = df['Address'].str.replace('14124 SE Mill St, Portland OR 97233, Portland, OR, 97266', '14124 SE Mill St, Portland, OR, 97233')
df['Address'] = df['Address'].str.replace('555 Double Eagle Ct. Suite 2000 Reno, NV 89521 , Reno, NV, 89521', '555 Double Eagle Ct., Suite 2000, Reno, NV, 89521')
df['Address'] = df['Address'].str.replace('3422 Rustin Ave Riverside, CA 92507, Riverside, CA, 92504', '3422 Rustin Ave, Riverside, CA, 92507')
df['Address'] = df['Address'].str.replace('2800 19th Stree NW Rochester, MN 55901, Rochester, MN, 55902', '2800 19th Stree NW, Rochester, MN, 55901')
df['Address'] = df['Address'].str.replace('9510 Carmel Mountain Road, San Diego CA 92129, San Diego, CA, 92129', '9510 Carmel Mountain Road, San Diego CA, 92129')
df['Address'] = df['Address'].str.replace('6540 Flanders Drive. San Diego, CA 92121, San Diego, CA, 92127', '6540 Flanders Drive., San Diego, CA, 92121')
df['Address'] = df['Address'].str.replace('725 Washington St. Santa Clara, Ca 95050, Santa Clara, CA, 95050', '725 Washington St., Santa Clara, CA, 95050')
df['Address'] = df['Address'].str.replace('6715 S Boe Lane Sioux Falls, SD 57108, Sioux Falls, SD, 57105', '6715 S Boe Lane, Sioux Falls, SD, 57108')
df['Address'] = df['Address'].str.replace('12611 N. Wilson St. Mead, WA 99021, Spokane, WA, 99218', '12611 N. Wilson St., Mead, WA, 99021')
df['Address'] = df['Address'].str.replace('1450 Newfield Avenue Stamford, CT 06905, Stamford, CT, 06905', '1450 Newfield Avenue, Stamford, CT, 06905')
df['Address'] = df['Address'].str.replace('2525 Gold Brook Dr Stockton, CA 95212, Stockton, CA, 95212', '2525 Gold Brook Dr, Stockton, CA, 95212')
df['Address'] = df['Address'].str.replace('1112 North G Street | Tacoma, WA 98403, Tacoma, WA, 98403', '1112 North G Street, Tacoma, WA, 98403')
df['Address'] = df['Address'].str.replace('1250 Erbes Rd. Thousand Oaks, CA 91362, Thousand Oaks, CA, 91302', '1250 Erbes Rd., Thousand Oaks, CA, 91362')
df['Address'] = df['Address'].str.replace('3201 N. Eastman Rd. Longview, TX 75605, Tyler, TX, 75799', '3201 N. Eastman Rd., Longview, TX, 75605')
df['Address'] = df['Address'].str.replace('St. Catherine of Siena School, 3460 Tennessee Street, Vallejo, CA, 94591', '3460 Tennessee Street, Vallejo, CA, 94591')
df['Address'] = df['Address'].str.replace('1650 Godfrey Ave. Wyoming,Mi 49509 , Wyoming, MI, 49509', '1650 Godfrey Ave., Wyoming, MI, 49509' )
df['Address'] = df['Address'].str.replace('3422 Rustin Ave Riverside, CA 92507', '3422 Rustin Ave, Riverside, CA, 92507')
df['Address'] = df['Address'].str.replace('San Martin De Porres Clinic: Kallumadanda Vinnie MD Mission, TX 78572', 'San Martin De Porres Clinic: Kallumadanda Vinnie MD, Mission, TX, 78572') # 33396
df['Address'] = df['Address'].str.replace('Rockwood Plastic Surgery Center: Gardner Glenn P MD Spokane, WA 99204', 'Rockwood Plastic Surgery Center: Gardner Glenn P MD, Spokane, WA, 99204' ) # 50841
df['Address'] = df['Address'].str.replace('2950 East 29th Street, Long Beach, CA', '2950 E 29th St, Long Beach, CA, 90806')
df['Address'] = df['Address'].str.replace('2585 Business Park Drive, Vista, 92081', '2585 Business Park Dr, Vista, CA, 92081')
df['Address'] = df['Address'].str.replace('401 E Arrowood Rd, Charlotte, Nc', '401 E Arrowood Rd, Charlotte, NC, 28217')
df['Address'] = df['Address'].str.replace('2900 Barberry Avenue, Columbia, Missouri 65202', '2900 Barberry Avenue, Columbia, MO, 65202')
df['Address'] = df['Address'].str.replace('2572 John F Kennedy Boulevard, Jersey City, New Jersey 07304', '2572 John F Kennedy Boulevard, Jersey City, NJ, 07304')
df['Address'] = df['Address'].str.replace('4656 N. Rancho Drive, Las Vegas, Nevada 89130', '4656 N. Rancho Drive, Las Vegas, NV, 89130')
df['Address'] = df['Address'].str.replace('6415 SE Morrison street, Portland, Oregon 97215', '6415 SE Morrison Street, Portland, OR, 97215')
df['Address'] = df['Address'].str.replace('2120 21st Avenue South, Seattle, Washington 98144', '2120 21st Avenue South, Seattle, WA, 98144')
df['Address'] = df['Address'].str.replace('4025 N. Hartford Ave., Tulsa, OK. 74106', '4025 N. Hartford Ave., Tulsa, OK, 74106')
df['Address'] = df['Address'].str.replace('6355 Willowbrook St., Wichita, Ks 67208', '6355 Willowbrook St., Wichita, KS, 67208')
df['Address'] = df['Address'].str.replace('4314 clarno dr, austin, TX 78749', '4314 Clarno Dr, Austin, TX 78749')
df['Address'] = df['Address'].str.replace('Suite 117', 'Suite 117,')
# specific
df.at[52126, 'Address'] = '1112 North G Street, Tacoma, WA, 98403'
df.at[46311, 'Address'] = '5531 Cancha de Golf Ste 202, Rancho Santa Fe, CA, 92091'
df.at[56607, 'Address'] = '4880 MacArthur Blvd. NW, Washington, DC, 20007'
df.at[27205, 'Address'] = '1018 Harding Street, Suite 112, Lafayette, LA, 70503'
df.at[50525, 'Address'] = '8740 Asheville Hwy, Spartanburg, SC, 29316'
df.at[397, 'Address'] = '1075 New Scotland Road, Albany, NY, 12208'
df.at[8207, 'Address'] = '3500 Cleveland Avenue NW, Canton, OH, 44709'
df.at[8292, 'Address'] = '231 Del Prado Blvd. S, Cape Coral, FL, 33990'
df.at[11542, 'Address'] = '1320 South Fairview Road, Columbia MO, 65203'
df.at[18372, 'Address'] = '7005 Woodbine Ave, Sacramento, CA, 95822'
df.at[19249, 'Address'] = '4424 Innovation Drive, Fort Collins, CO, 80525'
df.at[21626, 'Address'] = '1130 Eliza St.,, Green Bay, WI, 54301'
df.at[38985, 'Address'] = '2211 Saint Andrews Blvd., Panama City FL, 32405'
df.at[42682, 'Address'] = '5510 Munford Road, Raleigh NC, 27612'
df.at[46031, 'Address'] = '2850 Logan Ave, San Diego, CA, 92113'
df.at[46285, 'Address'] = '9510 Carmel Mountain Road, San Diego, CA, 92129'
df.at[54169, 'Address'] = '3535 West Messala Way, Tucson, AZ, 85746'
df.at[56231, 'Address'] = '2200 Minnesota Av. SE Washington DC, 20020'
df.at[56603, 'Address'] = '3328 Martin Luther King Junior Avenue Southeast, Washington DC, 20032'
df.at[10584, 'Address'] = '3375 W. 99th Street, Cleveland, OH, 44102'
df.at[11668, 'Address'] = '220 Stoneridge Drive, Suite 403, Columbia, SC, 29210'
df.at[23334, 'Address'] = '5300 El Camino Road, Las Vegas, NV, 89118'
df.at[34536, 'Address'] = '3000 53rd St SW, Naples, FL, 34116'
df.at[36778, 'Address'] = '2162 Mountain Blvd, Oakland, CA, 94611'
df.at[41320, 'Address'] = '6415 SE Morrison Street, Portland, OR, 97215'
df.at[42400, 'Address'] = '555 Double Eagle Ct., Suite 2000, Reno, NV, 89521'
df.at[49117, 'Address'] = '12351 8th Ave NE, Seattle, WA, 98125'
df.at[49183 , 'Address'] = '2120 21st Avenue South, Seattle, WA, 98144'
df.at[56231, 'Address'] = '2200 Minnesota Av. SE, Washington, DC, 20020'
df.at[11542, 'Address'] = '1320 South Fairview Road, Columbia, MO, 65203'
df.at[38985, 'Address'] = '2211 Saint Andrews Blvd., Panama City, FL, 32405'
df.at[42682, 'Address'] = '5510 Munford Road, Raleigh, NC, 27612'
df.at[56603, 'Address'] = '3328 Martin Luther King Junior Avenue Southeast, Washington, DC, 20032'
df['Address'] = df['Address'].str.replace('Washington, DC, Washington, DC,', 'Washington, DC,')
df['Address'] = df['Address'].str.replace('New Orleans, LA, New Orleans, LA,', 'New Orleans, LA,')
df['Address'] = df['Address'].str.replace('Albuquerque, NM, Albuquerque, NM,', 'Albuquerque, NM,' )
df['Address'] = df['Address'].str.replace('Chelsea, MA, Boston, MA,', 'Chelsea, MA,' )
df['Address'] = df['Address'].str.replace('Franklin, TN, Franklin, TN,', 'Franklin, TN,')
df['Address'] = df['Address'].str.replace('Hales Corners, WI, Milwaukee, WI', 'Hales Corners, WI,') # 50525
df['Address'] = df['Address'].str.replace('Albany NY', 'Albany, NY,' )
df['Address'] = df['Address'].str.replace('Prineville OR', 'Prineville, OR,')
df['Address'] = df['Address'].str.replace('Lancaster PA', 'Lancaster, PA,')
df['Address'] = df['Address'].str.replace('Portland OR', 'Portland, OR,')
df['Address'] = df['Address'].str.replace('San Diego CA', 'San Diego, CA,')
df['Address'] = df['Address'].str.replace('austin', 'Austin')
df['Address'] = df['Address'].str.replace('milwaukee', 'Milwaukee')
df['Address'] = df['Address'].str.replace('greeley', 'Greeley')
df['Address'] = df['Address'].str.replace('Oklahoma city', 'Oklahoma City')
df['Address'] = df['Address'].str.replace('CARMEL', 'Carmel')
df['Address'] = df['Address'].str.replace('COLORADO SPRINGS', 'Colorado Springs')
df['Address'] = df['Address'].str.replace('GREENSBORO', 'Greensboro')
df['Address'] = df['Address'].str.replace('SAN DIEGO', 'San Diego')
df['Address'] = df['Address'].str.replace('Cherry Hill/Baltimore', 'Cherry Hill')
df['Address'] = df['Address'].str.replace('AL ', 'AL, ')
df['Address'] = df['Address'].str.replace('AK ', 'AK, ')
df['Address'] = df['Address'].str.replace('AR ', 'AR, ')
df['Address'] = df['Address'].str.replace('AZ ', 'AZ, ')
df['Address'] = df['Address'].str.replace('CA ', 'CA, ')
df['Address'] = df['Address'].str.replace('CO ', 'CO, ')
df['Address'] = df['Address'].str.replace('CT ', 'CT, ')
df['Address'] = df['Address'].str.replace('DE ', 'DE, ')
df['Address'] = df['Address'].str.replace('DC ', 'DC, ')
df['Address'] = df['Address'].str.replace('FL ', 'FL, ')
df['Address'] = df['Address'].str.replace('GA ', 'GA, ')
df['Address'] = df['Address'].str.replace('HI ', 'HI, ')
df['Address'] = df['Address'].str.replace('IA ', 'IA, ')
df['Address'] = df['Address'].str.replace('ID ', 'ID, ')
df['Address'] = df['Address'].str.replace('IL ', 'IL, ')
df['Address'] = df['Address'].str.replace('IN ', 'IN, ')
df['Address'] = df['Address'].str.replace('KS ', 'KS, ')
df['Address'] = df['Address'].str.replace('KY ', 'KY, ')
df['Address'] = df['Address'].str.replace('LA ', 'LA, ')
df['Address'] = df['Address'].str.replace('MA ', 'MA, ')
df['Address'] = df['Address'].str.replace('MD ', 'MD, ')
df['Address'] = df['Address'].str.replace('ME ', 'ME, ')
df['Address'] = df['Address'].str.replace('MI ', 'MI, ')
df['Address'] = df['Address'].str.replace('MN ', 'MN, ')
df['Address'] = df['Address'].str.replace('MO ', 'MO, ')
df['Address'] = df['Address'].str.replace('MS ', 'MS, ')
df['Address'] = df['Address'].str.replace('MT ', 'MT, ')
df['Address'] = df['Address'].str.replace('NC ', 'NC, ')
df['Address'] = df['Address'].str.replace('ND ', 'ND, ')
df['Address'] = df['Address'].str.replace('NH ', 'NH, ')
df['Address'] = df['Address'].str.replace('NJ ', 'NJ, ')
df['Address'] = df['Address'].str.replace('NM ', 'NM, ')
df['Address'] = df['Address'].str.replace('NV ', 'NV, ')
df['Address'] = df['Address'].str.replace('NY ', 'NY, ')
df['Address'] = df['Address'].str.replace('OH ', 'OH, ')
df['Address'] = df['Address'].str.replace('OK ', 'OK, ')
df['Address'] = df['Address'].str.replace('OR ', 'OR, ')
df['Address'] = df['Address'].str.replace('PA ', 'PA, ')
df['Address'] = df['Address'].str.replace('RI ', 'RI, ')
df['Address'] = df['Address'].str.replace('SC ', 'SC, ')
df['Address'] = df['Address'].str.replace('SD ', 'SD, ')
df['Address'] = df['Address'].str.replace('TN ', 'TN, ')
df['Address'] = df['Address'].str.replace('TX ', 'TX, ')
df['Address'] = df['Address'].str.replace('UT ', 'UT, ')
df['Address'] = df['Address'].str.replace('VA ', 'VA, ')
df['Address'] = df['Address'].str.replace('VT ', 'VT, ')
df['Address'] = df['Address'].str.replace('WA ', 'WA, ')
df['Address'] = df['Address'].str.replace('WI ', 'WI, ')
df['Address'] = df['Address'].str.replace('WV ', 'WV, ')
df['Address'] = df['Address'].str.replace('WY ', 'WY, ')
###Output
_____no_output_____
###Markdown
Split Address Column - look for more discrepancies. Create lengths to find discrepancies in 'Address' column
###Code
df['City, State'] = df['Address'].str.split(',')
# Finding length because there are anomalies with the information in the address column
df['Length'] = df['City, State'].apply(lambda x: len(x) if x != None else 0 )
# 4 is the expected length
df['Length'].unique()
###Output
_____no_output_____
###Markdown
Create new dataframes for different lengths Length 1
###Code
# No Address - removing from df
df = df[df['Length'] != 1]
###Output
_____no_output_____
###Markdown
Length 7- https://stackoverflow.com/questions/6266727/python-cut-off-the-last-word-of-a-sentence- https://towardsdatascience.com/a-really-simple-way-to-edit-row-by-row-in-a-pandas-dataframe-75d339cbd313
###Code
df.loc[df['Length'] == 7]
for index in df.index:
if df.loc[index, 'Length'] == 7:
content = df.loc[index, 'Address']
df.loc[index, 'Address'] = ', '.join(content.split(', ')[:-3])
###Output
_____no_output_____
###Markdown
Length 8
###Code
df.loc[df['Length'] == 8]
for index in df.index:
if df.loc[index, 'Length'] == 8:
content = df.loc[index, 'Address']
df.loc[index, 'Address'] = ', '.join(content.split(', ')[:-3])
###Output
_____no_output_____
###Markdown
Check
###Code
df = df.drop(columns = ['City, State', 'Length'])
df['City, State'] = df['Address'].str.split(',')
# Checking string lengths after cleaning
df['Length'] = df['City, State'].apply(lambda x: len(x) if x != None else 0 )
df['Length'].unique()
###Output
_____no_output_____
###Markdown
Create City, State columns
###Code
df['City'] = df['City, State'].str[-3]
df['State'] = df['City, State'].str[-2]
###Output
_____no_output_____
###Markdown
Check Unique Cities
###Code
print(df['City'].nunique())
df['City'].unique()
###Output
396
###Markdown
Check Unique States
###Code
print(df['State'].nunique())
df['State'].unique()
df.loc[df['State'] == '']
df.at[32718, 'Address'] = '5425 S. 111th Street, Hales Corners, WI, 53222'
df.at[32718, 'State'] = 'WI'
df.at[32718, 'City'] = 'Hales Corners'
###Output
_____no_output_____
###Markdown
Update School Score- change to int so data can be sorted by the value
###Code
df['Score'] = df['Score'].str.replace('/10', '')
df['Score'] = df['Score'].astype(int)
###Output
_____no_output_____
###Markdown
Separating into PK, K, Elementary, Middle, High School- https://stackoverflow.com/questions/61877712/check-if-an-item-in-a-list-is-available-in-a-column-which-is-of-type-list
###Code
def parse_grades(grades_string):
GRADES = ['PK', 'K', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', 'Ungraded']
# Remove & for grades list
grades_string = grades_string.replace(' &', ',')
# Grades list - will add to separated grade string to grades
grades = []
# split strings based on ','
string_list = grades_string.split(',')
# look for sections of list with '-'
dash = "-"
for i in range(len(string_list)):
clean_string = string_list[i].strip()
if dash in clean_string:
# split using '-', loop and add to grades variable
start_grade, end_grade = clean_string.split(dash)
grades += GRADES[GRADES.index(start_grade) : GRADES.index(end_grade)+ 1]
else:
# add string to grades
grades.append(clean_string)
return grades
print(df['Grades'].nunique())
unique_grades_combination = df['Grades'].unique()
def test_complete_dataset(unique_grades_combination):
# create a loop that goes thru dataset and invoke parse_grades with each element
separated_grades_list = []
for i in unique_grades_combination:
separated_grades_list.append(parse_grades(i))
dictionary_grade_list = dict(zip(unique_grades_combination, separated_grades_list))
return dictionary_grade_list
dictionary = test_complete_dataset(unique_grades_combination)
df['Clean_Grades'] = df['Grades'].map(dictionary)
high_school = ['9', '10', '11', '12']
middle_school = ['6', '7', '8']
elementary = ['K', '1', '2', '3', '4', '5']
pre_k = ['PK']
set1 = set(high_school)
df['High School (9-12)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set1]))
set2 = set(middle_school)
df['Middle School (6-8)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set2]))
set3 = set(elementary)
df['Elementary (K-5)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set3]))
set4 = set(pre_k)
df['Pre-Kindergarten (PK)'] = df['Clean_Grades'].apply(lambda x: any([k in x for k in set4]))
df[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] = df[['High School (9-12)', 'Middle School (6-8)', 'Elementary (K-5)', 'Pre-Kindergarten (PK)']] * 1
df['Grades'] = df['Grades'].str.replace(' & Ungraded', '')
###Output
_____no_output_____ |
notebooks/Basic experiment tools dev.ipynb | ###Markdown
Section II. Dask image parallelization dev notebook - Created on: Monday March 28th, 2022 - Created by: Jacob Alexander Rose
###Code
# %%bash
# !export OMP_NUM_THREADS=1
# export MKL_NUM_THREADS=1
# export OPENBLAS_NUM_THREADS=1
# echo '${OMP_NUM_THREADS}'
# import dask
# @dask.delayed
# def load(filename):
# ...
# @dask.delayed
# def process(data):
# ...
# @dask.delayed
# def save(data):
# ...
# def f(filenames):
# results = []
# for filename in filenames:
# data = load(filename)
# data = process(data)
# result = save(data)
# return results
# dask.compute(f(filenames))
# source: https://examples.dask.org/machine-learning/torch-prediction.html
from typing import *
import glob
import toolz
import dask
import dask.array as da
import torch
from torchvision import transforms
from PIL import Image
import pandas as pd
pd.set_option("display.max_colwidth", 150)
import numpy as np
from imutils.ml.data.datamodule import Herbarium2022DataModule, Herbarium2022Dataset
# @dask.delayed
# def transform(img):
# trn = transforms.Compose([
# transforms.Resize(256),
# transforms.CenterCrop(224),
# transforms.ToTensor(),
# transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
# ])
# return trn(img)
import os
from pathlib import Path
import dask
import dask.dataframe as dd
#####################################################
@dask.delayed
def load(path: str,
fs=__builtins__):
with fs.open(path, 'rb') as f:
img = Image.open(f).convert("RGB")
return img
@dask.delayed
def process(img: Image.Image,
size: Tuple[int]):
img = img.resize(size=size,
resample=Image.BICUBIC)
return img
@dask.delayed
def save(img: Image.Image,
         target_path: str,
         fs=__builtins__):
    with fs.open(target_path, 'wb') as f:
        img.save(f, format="JPEG")  # PIL saves via the instance method, not Image.save(f, ...)
    return os.path.isfile(target_path)
def run(data_chunk: pd.DataFrame, size: Tuple[int, int] = (512, 512)):
    # Build lazy load -> resize -> save tasks for every row of a pandas chunk.
    # Assumes the chunk has been through Config.process_full_dataframe (defined below),
    # so each row carries `source_path` (original file) and `path` (resize target);
    # `size` defaults to the 512 px target resolution used by Config.
    results = []
    for row in data_chunk.itertuples():
        img = load(row.source_path)
        img = process(img, size=size)
        results.append(save(img, target_path=row.path))
    return results

# dask.compute(*run(data_chunk))  # see the sketch below
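# A minimal usage sketch (assumes `data_df` further down has been through cfg.process_full_dataframe,
# i.e. it carries `source_path`/`path` columns; `head()` pulls a small pandas chunk for a smoke test):
# sample = data_df.head(20)
# tasks = run(sample)          # list of delayed save() results
# dask.compute(*tasks)         # executes load -> resize -> save for each row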
from rich import print as pp
from dataclasses import dataclass, field
@dataclass
class Config:
source_root_dir: Path = Path('/media/data_cifs/projects/prj_fossils/data/raw_data/herbarium-2022-fgvc9_resize')
target_root_dir_template: Path = Path('/media/data_cifs/projects/prj_fossils/data/raw_data/herbarium-2022-fgvc9_resize')
target_resolution: int = 512
target_root_dir: str = field(init=False)
def __post_init__(self):
self.target_root_dir = Path(f"{str(self.target_root_dir_template)}-{self.target_resolution}")
os.makedirs(self.target_root_dir, exist_ok=True)
def get_target_path(self, source_path: Path) -> Path:
"""
Finds the source path's location relative to the source root, and returns a new path at the same location relative to the target root.
- source and target root dirs are specified at instantiation of config, must update instance attributes in order to chaange this method.
"""
return str(self.target_root_dir / Path(source_path).relative_to(self.source_root_dir))
def process_full_dataframe(self, data_df: pd.DataFrame) -> pd.DataFrame:
"""
Prepare dataframe for large-scale image file processing.
Creates a `target_path` column in data_df and fills it with values produced by self.get_target_path, then renames is as `path` while renaming the original colmn `path` to be `source_path`.
"""
data_df = data_df.assign(target_path = data_df.path.apply(self.get_target_path, meta=("target_path", "string")))
data_df = data_df.rename(columns={"path":"source_path",
"target_path":"path"})
data_df = data_df.sort_index()
return data_df
def read_dask_dataframe_from_csv(self,
csv_path: str,
columns: List[str],
col_dtypes: Dict[str, Any]) -> dd.DataFrame:
data_df = dd.read_csv(csv_path, usecols=["Unnamed: 0", *columns], dtype=col_dtypes
).rename(columns={"Unnamed: 0":"idx"})
data_df = data_df.set_index("idx")
data_df = data_df.repartition(16)
return data_df
cfg = Config()
pp(cfg)
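# Illustrative sanity check of the source -> target mapping (the file path below is
# hypothetical, but follows the layout under source_root_dir):
# cfg.get_target_path(cfg.source_root_dir / "train_images/000/00/abc123.jpg")
# -> '.../herbarium-2022-fgvc9_resize-512/train_images/000/00/abc123.jpg'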
catalog_dir = "/media/data/jacob/GitHub/image-utils/imutils/big/data"
train_csv_path = Path(catalog_dir, "train_metadata.csv")
test_csv_path = Path(catalog_dir, "test_metadata.csv")
train_columns = ['path', 'image_id',
'category_id', 'genus_id', 'scientificName',
'Species', 'institution_id',
'family', 'genus', 'species',
'file_name', 'collectionCode']
train_col_dtypes = {'path':"string",
'image_id':"string",
'category_id': "category",
'genus_id': "category",
'scientificName': "category",
'Species': "category",
'institution_id': "category",
'family': "category",
'genus': "category",
'species': "category",
'file_name': "string",
'collectionCode': "category"}
test_columns = ['path', 'image_id', 'file_name']
test_col_dtypes = {'path':"string",
'image_id':"string",
'file_name': "string"}
train_df = cfg.read_dask_dataframe_from_csv(csv_path=train_csv_path,
columns=train_columns,
col_dtypes=train_col_dtypes)
test_df = cfg.read_dask_dataframe_from_csv(csv_path=test_csv_path,
columns=test_columns,
col_dtypes=test_col_dtypes)
train_df.head()
test_df.head()
# train_df = dd.read_csv(train_csv_path, usecols=["Unnamed: 0", *train_columns], dtype=train_col_dtypes
# ).rename(columns={"Unnamed: 0":"idx"})
# train_df = train_df.set_index("idx")
# train_df = train_df.repartition(16)
# train_df.head()
# data_df = data_df.assign(target_path = data_df.path.apply(cfg.get_target_path, meta=("target_path", "string")))
# data_df = data_df.rename(columns={"path":"source_path",
# "target_path":"path"})
# data_df = data_df.sort_index()
seed = 85
random_state = np.random.RandomState(seed=seed)
# test_df = dd.read_csv(train_csv_path, index=0)
# train_df = pd.read_csv(train_csv_path, index_col=0, usecols=train_columns, dtype=train_col_dtypes)
# train_df.describe(include='all')
test_df = dd.read_csv(test_csv_path, usecols=["Unnamed: 0", *test_columns], dtype=test_col_dtypes
).rename(columns={"Unnamed: 0":"idx"})
test_df = test_df.set_index("idx")
test_df = test_df.repartition(16)
test_df.head()
%%time
train_df = dd.read_csv(train_csv_path, usecols=["Unnamed: 0", *train_columns], dtype=train_col_dtypes
).rename(columns={"Unnamed: 0":"idx"})
# ).set_index("idx")
train_df = train_df.set_index("idx")
train_df = train_df.repartition(16)
train_df.head()
###Output
CPU times: user 10.5 s, sys: 1.57 s, total: 12.1 s
Wall time: 14.2 s
###Markdown
Take a small fraction of the dataset for testing
###Code
data_df = train_df.sample(frac=0.001,
replace=False,
random_state=random_state)
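# frac=0.001 of the full training catalog (roughly 840k rows, per the shape computed further
# down) gives on the order of 800-850 rows -- enough for a quick end-to-end smoke test.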
data_df.visualize()
for batch in data_df.itertuples():
break
print(type(batch))
print(dir(batch))
batch.scientificName
batch.category_id
from IPython.display import display
display(batch)
!ls -alh '/media/data_cifs/projects/prj_fossils/data/raw_data/herbarium-2022-fgvc9_resize'
df = data_df.compute()
df
data_df.head(2).path.apply(lambda x: Path(x).parent.parent.parent.parent)
df = data_df.loc[0,:].persist()
df.persist()
topk = 2
ckpt_dir = "/media/data_cifs/projects/prj_fossils/users/jacob/experiments/2022/herbarium2022/hydra_experiments/2022-03-28/09-32-27/ckpts/"
# ckpt_paths = [os.path.join(ckpt_dir, file_path) for file_path in sorted(os.listdir(ckpt_dir), reverse=True)][:2]
# pp(ckpt_paths)
ckpt_path = "/media/data_cifs/projects/prj_fossils/users/jacob/experiments/2022/herbarium2022/hydra_experiments/2022-03-28/09-32-27/ckpts/epoch=10-val_loss=1.901-val_macro_F1=0.567/model_weights.ckpt"
# jrose/herbarium2022/2up1al9o
import os
import wandb
artifact = wandb.Artifact("model-weights", "checkpoints")
# Add Files and Assets to the artifact using
# `.add`, `.add_file`, `.add_dir`, and `.add_reference`
artifact.add_file(ckpt_path)
artifact.save()
os.environ["WANDB_PROJECT"]="herbarium2022"
!set | grep WANDB
api = wandb.Api()
# run = api.run(overrides=dict(entity="jrose", project="herbarium2022", run="2up1al9o"))
run = api.run("herbarium2022/2up1al9o")
print(run)
run.upload_file(ckpt_path)
# for path in ckpt_paths:
# print(f"Uploading file to wandb: {path}")
# run.upload_file(path)
# run = wandb.init(project=PROJECT_NAME, resume=True)
run.finish
# Herbarium2022DataModule,
catalog_dir = "/media/data/jacob/GitHub/image-utils/imutils/big/data"
data = Herbarium2022Dataset(catalog_dir=catalog_dir, subset="train", transform=transform)
###Output
_____no_output_____
###Markdown
Download from wandb the best resnext50_4x30d or w/e from Experiment 18 2022-03-28
###Code
import wandb
run = wandb.init()
artifact = run.use_artifact('jrose/herbarium2022/model-weights:v8', type='checkpoints')
artifact_dir = artifact.download()
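# A minimal sketch of inspecting the downloaded checkpoint (assumes the artifact contains the
# `model_weights.ckpt` file uploaded above; the exact keys depend on how it was saved):
# state = torch.load(os.path.join(artifact_dir, "model_weights.ckpt"), map_location="cpu")
# list(state.keys())[:5]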
###Output
_____no_output_____
###Markdown
etc
###Code
data_df.loc[:5,:]
%%time
demo = data_df.loc[:10,:].path.apply(cfg.get_target_path, meta=("target_path", "string"))
demo
# demo.compute()
demo = demo.persist()
demo
demo.shape[0].compute()
dir(demo)
data_df = data_df.assign(target_path = data_df.path.apply(cfg.get_target_path, meta=("target_path", "string")))
data_df.head(10)
# data_df = data_df.persist()
print(data_df.shape[0].compute())
# data_df = data_df.persist()
print(train_df.shape[0].compute())
839772/836
train_df.visualize()
def partition_info(data):
print(f"type(data): {type(data)}")
print(f"data.shape: {data.shape}")
result = train_df.map_partitions(partition_info)
# train_ddf = train_df.to_delayed()
result.compute()
train_ddf
%time train_df.head()
import torch
torch.cat?
# decoded_targets = data.get_decoded_targets()
# decoded_targets
paths = data.paths.sample(100).values.tolist()
# paths
import sys
from PIL import Image
from tqdm import tqdm
for infile in tqdm(paths):
try:
with Image.open(infile) as im:
print(infile, im.format, f"{im.size}x{im.mode}")
except OSError:
pass
data[0]
###Output
_____no_output_____
###Markdown
prefect
###Code
import datetime
import os
import prefect
from prefect import task
from prefect.engine.signals import SKIP
from prefect.tasks.shell import ShellTask
@task
def curl_cmd(url: str, fname: str) -> str:
"""
The curl command we wish to execute.
"""
if os.path.exists(fname):
raise SKIP("Image data file already exists.")
return "curl -fL -o {fname} {url}".format(fname=fname, url=url)
# ShellTask is a task from the Task library which will execute a given command in a subprocess
# and fail if the command returns a non-zero exit code
download = ShellTask(name="curl_task", max_retries=2, retry_delay=datetime.timedelta(seconds=10))
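# A minimal sketch of wiring these into a flow (Prefect 1.x API, which is where
# prefect.tasks.shell.ShellTask lives); the URL and filename are placeholders:
# from prefect import Flow, Parameter
# with Flow("download-image-data") as flow:
#     url = Parameter("url")
#     fname = Parameter("fname")
#     command = curl_cmd(url, fname)
#     download(command=command)
# flow.run(parameters={"url": "https://example.com/data.zip", "fname": "data.zip"})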
###Output
_____no_output_____
###Markdown
etc
###Code
objs = [load(x) for x in glob.glob("hymenoptera_data/val/*/*.jpg")]
To load the data from cloud storage, say Amazon S3, you would use
import s3fs
fs = s3fs.S3FileSystem(...)
objs = [load(x, fs=fs) for x in fs.glob(...)]
tensors = [transform(x) for x in objs]
batches = [dask.delayed(torch.stack)(batch)
for batch in toolz.partition_all(10, tensors)]
batches[:5]
@dask.delayed
def predict(batch, model):
with torch.no_grad():
out = model(batch)
_, predicted = torch.max(out, 1)
predicted = predicted.numpy()
return predicted
# Moving the model around:
# PyTorch neural networks are large, so we don't want to repeat the model many times in our task graph (once per batch).
import pickle
dask.utils.format_bytes(len(pickle.dumps(model)))
# (example output: '44.80 MB')
# Instead, we'll also wrap the model itself in dask.delayed. This means the model only shows up once in the Dask graph.
# Additionally, since we performed fine-tuning above (and that runs on a GPU if it's available), we should move the model back to the CPU.
dmodel = dask.delayed(model.cpu()) # ensuring model is on the CPU
Now we’ll use the (delayed) predict method to get our predictions.
predictions = [predict(batch, dmodel) for batch in batches]
dask.visualize(predictions[:2])
predictions = dask.compute(*predictions)
predictions
###Output
_____no_output_____
###Markdown
Scratch
###Code
# import wandb
# import os
from pytorch_lightning import utilities #.rank_zero import rank_zero_only
# import wandb
# import os
from pytorch_lightning import plugins #, utilities #.rank_zero import rank_zero_only
dir(plugins)
os.path.isfile("/media/data_cifs/projects/prj_fossils/users/jacob/experiments/2022/herbarium2022/hydra_experiments/2022-03-28/00-59-52/ckpts/epoch=00-val_loss=10.447-val_macro_F1=0.000/model_weights.ckpt")
os.listdir(os.path.dirname("/media/data_cifs/projects/prj_fossils/users/jacob/experiments/2022/herbarium2022/hydra_experiments/2022-03-28/00-59-52/ckpts/epoch=00-val_loss=10.447-val_macro_F1=0.000/model_weights.ckpt"))
ckpts = wandb.Artifact("experiment-ckpts", type="checkpoints")
ckpt = "/media/data_cifs/projects/prj_fossils/users/jacob/experiments/2022/herbarium2022/hydra_experiments/2022-03-28/00-18-52/ckpts/epoch=00-val_loss=22.030-val_macro_F1=0.000.ckpt"
ckpts.add_file(ckpt)#trainer.checkpoint_callback.best_model_path)
exp.use_artifact(ckpts)
###Output
_____no_output_____
###Markdown
Section I. Basic experiment tools dev notebook. Created on: Tuesday March 22nd, 2022. Created by: Jacob Alexander Rose
###Code
%load_ext autoreload
%autoreload 2
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from IPython.display import display
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
from pathlib import Path
from icecream import ic
from rich import print as pp
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# from imutils.big.datamodule import Herbarium2022DataModule, Herbarium2022Dataset
from imutils.ml.data.datamodule import Herbarium2022DataModule, Herbarium2022Dataset
from imutils.ml.utils.etl_utils import ETL
import pytorch_lightning as pl
from torchvision import transforms as T
import argparse
import imutils
from hydra.experimental import compose, initialize, initialize_config_dir
import hydra
from omegaconf import DictConfig, OmegaConf
from typing import *
251932/256
###Output
_____no_output_____
###Markdown
helper display func
###Code
def display_train_timing_info(batches_per_epoch: int,
batches_per_second: float,
batch_size: int):
samples_per_epoch = batches_per_epoch*batch_size
    seconds_per_epoch = batches_per_epoch / batches_per_second  # batches divided by batches-per-second gives seconds
min_per_epoch = seconds_per_epoch / 60
hrs_per_epoch = min_per_epoch / 60
samples_per_second = batches_per_second * batch_size
batches_per_min = batches_per_second * 60
batches_per_hr = batches_per_min * 60
samples_per_min = samples_per_second * 60
samples_per_hr = samples_per_min * 60
pp([f"seconds_per_epoch: {seconds_per_epoch:>,}",
f"min_per_epoch: {min_per_epoch:.4f}",
f"hrs_per_epoch: {hrs_per_epoch:.4f}",
f"epochs_per_second: {1/seconds_per_epoch:.4f}",
f"epochs_per_min: {1/min_per_epoch:.4f}",
f"epochs_per_hr: {1/hrs_per_epoch:.4f}",
f"batches_per_epoch: {batches_per_epoch:.4g}",
f"samples_per_epoch: {samples_per_epoch:.4g}",
f"seconds_per_batch: {1/batches_per_second:.4f}",
f"batches_per_second: {batches_per_second:.4f}",
f"batches_per_min: {batches_per_min:.4f}",
f"batches_per_hr: {batches_per_hr:.4f}",
f"samples_per_second: {samples_per_second:.4f}",
f"samples_per_min: {samples_per_min:.4f}",
f"samples_per_hr: {samples_per_hr:.4g}",
f"batch_size: {batch_size}"])
###Output
_____no_output_____
###Markdown
Experiment 2
###Code
batches_per_second = (1/1.7)
batches_per_epoch = 4374
batch_size=48
print(f"Experiment #2: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #2: batch_size=48, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 3
###Code
batches_per_second = (1/2.15)
batches_per_epoch = 3280
batch_size=64
print(f"Experiment #3: batch_size=64, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #3: batch_size=64, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 4
###Code
batches_per_second = (1/3.3)
batches_per_epoch = 2187
batch_size=96
print(f"Experiment #4: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #4: batch_size=96, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 5. Started: 3:15 AM - 2022-03-23 Ended: 4:30 AM - 2022-03-23
###Code
batches_per_second = (1/4.3)
batches_per_epoch = 1640
batch_size=128
print(f"Experiment #5: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #5: batch_size=128, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 6. Started: 4:30 AM - 2022-03-23 Ended: 5:45 AM - 2022-03-23
###Code
batches_per_second = (1/5.15)
batches_per_epoch = 1458
batch_size=144
print(f"Experiment #6: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
batches_per_second = (1/4.87)
batches_per_epoch = 1458
batch_size=144
print(f"Experiment #6: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #6: batch_size=144, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 7 - Using Accumulate_grad_batches=2. Started: 5:45 AM - 2022-03-23 Ended: x:xx AM - 2022-03-23
###Code
batches_per_second = (1/4.81)
batches_per_epoch = 1458
batch_size=144
print(f"Experiment #7: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*2,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #7: batch_size=144, num_processes=4, num_devices=2
Using 50% of samples
###Markdown
Experiment 8 - Using Accumulate_grad_batches=2 - lr=1e-2 - Removed from base_callbacks.yaml: -train.callbacks.lr_monitor, -train.callbacks.early_stopping, -train.callbacks.model_checkpoint. Started: 9:00 AM - 2022-03-23 Ended: x:xx AM - 2022-03-23
###Code
batches_per_second = (1/ 4.26)
batches_per_epoch = 229
batch_size=128
print(f"Experiment #8: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 1% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*50,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*100,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #8: batch_size=128, num_processes=4, num_devices=2
Using 1% of samples
###Markdown
Experiment 11 - Using Accumulate_grad_batches=1 - lr=0.5e-3 - freeze_backbone_up_to=-4. Started: 12:25 PM - 2022-03-23 Ended: 2:55 PM - 2022-03-23
###Code
batches_per_second = (1/ 4.53)
batches_per_epoch = 3282
batch_size=128
print(f"Experiment #11: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 1% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*50,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch*100,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #11: batch_size=128, num_processes=4, num_devices=2
Using 1% of samples
###Markdown
Experiment 12 - Using Accumulate_grad_batches=1 - lr=1e-2 - freeze_backbone_up_to=-4 - batch_size=128 - preprocess_size=256 - resolution=224. Started: 3:00 PM - 2022-03-23 Ended: x:xx AM - 2022-03-23
###Code
batches_per_second = (1/ 2.58)
batches_per_epoch = 3282
batch_size=128
print(f"Experiment #8: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/2,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 1% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/100,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #12: batch_size=128, num_processes=4, num_devices=2
Using 100% of samples
###Markdown
Experiment 13 (running in parallel to 12, since 4 GPUs just opened up; tried doubling the lr scaling to accommodate the doubling of the number of GPUs) - Increased num_devices from 2->4 - Using Accumulate_grad_batches=1 - lr=2e-2 - freeze_backbone_up_to=-4 - batch_size=128 - preprocess_size=256 - resolution=224. Started: 3:52 PM - 2022-03-23 Ended: x:xx AM - 2022-03-23
###Code
batches_per_second = (1/ 4.2)
batches_per_epoch = 1642
batch_size=128
print(f"Experiment #8: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/2,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 1% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/100,
batches_per_second=batches_per_second,
batch_size=batch_size)
###Output
Experiment #13: batch_size=128, num_processes=4, num_devices=4
Using 100% of samples
###Markdown
Experiment 14 - Increased num_devices from 4 - Using Accumulate_grad_batches=2 - lr=1e-3 - freeze_backbone=False - batch_size=64 - preprocess_size=256 - resolution=224. Started: 5:00 PM - 2022-03-23 Ended: x:xx AM - 2022-03-24
###Code
print(f"{1/((3282)/(90*60)):.2f}")
batches_per_second = (1/1.65 )
batches_per_epoch = 3282
batch_size=64
print(f"Experiment #8: batch_size={batch_size}, num_processes=4, num_devices=2")
print("Using 100% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("Using 50% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/2,
batches_per_second=batches_per_second,
batch_size=batch_size)
print("(Extrapolated prediction) Using 1% of samples")
display_train_timing_info(batches_per_epoch=batches_per_epoch/100,
batches_per_second=batches_per_second,
batch_size=batch_size)
229/16/60
27.96*2
###Output
_____no_output_____ |
Python_Notebook.ipynb | ###Markdown
Type Conversions
###Code
#This is type conversion code
result='10'
print(type(result))
print(float(result))
print(int(result))
#Converting string into list
mystring='abcdef'
print(type(mystring))
print(list(mystring))
print(tuple(mystring))
#converting a list to Tuple
mylist=['abc',10,'def',25.5]
print(type(mylist))
print(tuple(mylist))
#converting list to string
changed_string=str(mylist)
print(changed_string)
print(type(changed_string))
#working with dictionary
#converting dictionary to list - it will consider only the keys and not the values
my_dict={'name':'Yatheesh', 'age': 22}
print(list(my_dict))
#converting range to list and tuple
my_range=range(10)
print(list(my_range))
print(tuple(my_range))
###Output
_____no_output_____
###Markdown
OPERATORS Arithematic Operators
###Code
#ARITHEMATIC OPERATORS
#Addition (+)
print(2+3)
print('hello'+'world')
print(10.234+63.248)
list1=['hello','Yatheesh', 10,55]
list2=['i am','God',100,6348]
print(list1+list2)
print(False+True)
#list and tuple cannot be added
#Subtraction (-)
print(2-3)
print(10.23-6.874)
print(True-True)
#Multiplication(*)
print('&&&&&&&&&&&&&&&&&&&')
print('&'*15)
mystringo='class'
print(mystringo*5)
#Division(/)
print(98/32)
print(False/6)
#Floor Division (//) - Removes decimal places and keeps only integer
print(7/3)
print(7//3)
#exponents(**)
print(2**5)
print(25**6)
#Modulus(%)
print(18.368%4)
###Output
_____no_output_____
###Markdown
Assigment Operators
###Code
#ASSIGNMENT OPERATORS
var1=10
var2=10
var1+=20 #var1=var1+20
var2-=30 #var2=var2-30
print(var1,var2)
###Output
_____no_output_____
###Markdown
Comparison Operators
###Code
#COMPARISON OPERATORS
var1=10
var2=20
var3=20
print(var1==var2)
print(var2==var3)
print(10==20)
print('hello'=='helllooo')
print(False==False)
print(10<20)
print(20.34>845.368)
###Output
_____no_output_____
###Markdown
Logical Operators
###Code
#LOGICAL OPERATORS
print(True and True)
print(True and False)
print(True or False)
var1=10
var2=55
str1='hello'
str2='hello'
print(var1<var2 and str1 != str2)
###Output
_____no_output_____
###Markdown
###Code
# Importing the libraries we will need
# Importing the pandas library
#
import pandas as pd
# Importing the numpy library
#
import numpy as np
###Output
_____no_output_____
###Markdown
Reading our Dataset from CSV file
###Code
# Let's read the data from the CSV file and create the dataframe to be used
#
df = pd.read_csv("//content/Autolib_dataset (2) (1).csv")
df
###Output
_____no_output_____
###Markdown
Previewing our Dataset
###Code
#previewing the first ten rows of our data
df.head(10)
df
###Output
_____no_output_____
###Markdown
Accessing Information about our Dataset
###Code
#information about our dataset
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 25 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Address 5000 non-null object
1 Cars 5000 non-null int64
2 Bluecar counter 5000 non-null int64
3 Utilib counter 5000 non-null int64
4 Utilib 1.4 counter 5000 non-null int64
5 Charge Slots 5000 non-null int64
6 Charging Status 5000 non-null object
7 City 5000 non-null object
8 Displayed comment 111 non-null object
9 ID 5000 non-null object
10 Kind 5000 non-null object
11 Geo point 5000 non-null object
12 Postal code 5000 non-null int64
13 Public name 5000 non-null object
14 Rental status 5000 non-null object
15 Scheduled at 47 non-null object
16 Slots 5000 non-null int64
17 Station type 5000 non-null object
18 Status 5000 non-null object
19 Subscription status 5000 non-null object
20 year 5000 non-null int64
21 month 5000 non-null int64
22 day 5000 non-null int64
23 hour 5000 non-null int64
24 minute 5000 non-null int64
dtypes: int64(12), object(13)
memory usage: 976.7+ KB
###Markdown
Cleaning our Dataset
###Code
df.drop(['Displayed comment', 'ID', 'Geo point', 'Address', 'Cars'], axis = 1, inplace = True)
df
#Outliers
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
(df < (Q1 - 1.5 * IQR)) |(df > (Q3 + 1.5 * IQR))
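# A sketch of one way to actually drop the rows flagged by the IQR rule above (kept commented so df is unchanged):
# numeric = df.select_dtypes(include='number')
# outlier_mask = ((numeric < (Q1 - 1.5 * IQR)) | (numeric > (Q3 + 1.5 * IQR))).any(axis=1)
# df_no_outliers = df[~outlier_mask]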
###Output
_____no_output_____
###Markdown
Completeness
###Code
#finding missing values
df.isnull().any()
###Output
_____no_output_____
###Markdown
Consistency
###Code
df[df.duplicated()]
df2=df.drop_duplicates()
df.head()
###Output
_____no_output_____
###Markdown
Uniformity
###Code
# Creating a new column where we find the difference in the number of bluecars at the station
df2['Bluecar_Diff'] = df2['Bluecar counter'].diff()
df2.head()
###Output
_____no_output_____
###Markdown
The most popular hour of the day for picking up a shared electric car (Bluecar) in the city of Paris over the month of April 2018
###Code
df2[df2['Bluecar_Diff'] < 0].groupby('hour')['hour'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
The most popular hour for returning cars?
###Code
df2[df2['Bluecar_Diff'] > 0].groupby('hour')['hour'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
What station is the most popular
###Code
#overall
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok')].groupby('Public name')['Public name'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
Which station is the most popular at the most popular picking hour?
###Code
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok') & (df2['hour'] == 21)].groupby('Public name')['Public name'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
What postal code is the most popular for picking up Blue cars? Does the most popular station belong to that postal code? Overall?
###Code
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok')].groupby('Postal code')['Postal code'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
What postal code is the most popular for picking up Blue cars? Does the most popular station belong to that postal code?
###Code
#At the most popular picking hour?
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok') & (df2['hour'] == 4)].groupby('Postal code')['Postal code'].count().sort_values(ascending= False)
###Output
_____no_output_____
###Markdown
Utilib counter
###Code
#creating a new column where we find the difference in the number of utilib at the counter
df2['Utilib_Diff'] = df2['Utilib counter'].diff()
df2.head()
#The most popular hour of the day for picking up a shared electric car (Utilib) in the city of Paris over the month of April 2018
df2[df2['Utilib_Diff'] < 0].groupby('hour')['hour'].count().sort_values(ascending= False)
#What is the most popular hour for returning cars?
df2[df2['Utilib_Diff'] > 0].groupby('hour')['hour'].count().sort_values(ascending= False)
#What station is the most popular?
#Overall
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok') & (df2['hour'] == 5)].groupby('Public name')['Public name'].count().sort_values(ascending= False)
#What station is the most popular?
#At the most popular picking hour?
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok') & (df2['hour'] == 5)].groupby('Postal code')['Postal code'].count().sort_values(ascending= False)
#What postal code is the most popular for picking up utilib? Does the most popular station belong to that postal code?
#At the most popular picking hour?
df2[(df2['Kind'] == 'STATION') & (df2['Status'] == 'ok') & (df2['hour'] == 4)].groupby('Postal code')['Postal code'].count().sort_values(ascending= False)
###Output
_____no_output_____ |
predictive-analytics/foundations-of-predictive-analytics-in-python/foundations-of-predictive-analytics-in-python-2.ipynb | ###Markdown
1. The basetable timeline
###Code
import pandas as pd
import datetime
def min_max(column):
return pd.Series(index=['min','max'], data=[column.min(), column.max()])
gifts = pd.read_csv('data2/gifts.csv')
gifts.info()
gifts.head(2)
gifts['date'] = pd.to_datetime(gifts['date'])
gifts.info()
gifts.head(2)
gifts.apply(min_max)
gifts.describe()
gifts.drop(['Unnamed: 0'], axis=1, inplace=True)
gifts.head()
start_target = datetime.datetime(year=2018, month=5, day=1)
end_target = datetime.datetime(year=2018, month=8, day=1)
start_target, end_target
gifts_target = gifts[(gifts['date']>=start_target) & (gifts['date']<end_target)]
gifts_target.info()
gifts_pred_variables = gifts[(gifts['date']<start_target)]
gifts_pred_variables.info()
gifts_pred_variables.head(2)
gifts_pred_variables.apply(min_max)
###Output
_____no_output_____
###Markdown
1.1. The population
###Code
donation_2016 = gifts[gifts['date'].dt.year==2016]
donation_2016.info()
donors_include = set(donation_2016['id'])
len(donors_include)
donation_2017 = gifts[(gifts['date'].dt.year==2017) & (gifts['date'].dt.month<5)]
donation_2017.info()
donors_exclude = set(donation_2017['id'])
len(donors_exclude)
population = donors_include.difference(donors_exclude)
len(population)
###Output
_____no_output_____
###Markdown
Population is the list of people who donated in 2016 but made no donation in early 2017 (lapsed donors). Since there are **12,062** ids in 2016 and **2,305** ids in early 2017, the population is the set difference of the two. 1.2. The target
###Code
basetable = pd.read_csv('data2/basetable.csv')
basetable.info()
basetable.head()
basetable.describe()
basetable2 = pd.read_csv('data2/basetable_ex_2_13.csv')
basetable2.info()
basetable2.head()
basetable2.apply(min_max)
basetable3 = pd.read_csv('data2/basetable_interactions.csv')
basetable3.info()
basetable3.head()
basetable3.apply(min_max)
living_places = pd.read_csv('data2/living_places.csv')
living_places.info()
living_places.iloc[:, 1:3] = living_places.iloc[:, 1:3].apply(pd.to_datetime, errors='coerce')
living_places.info()
living_places.head()
living_places.apply(min_max)
basetable['target'] = pd.Series([1 if donor_id in population else 0 for donor_id in basetable['donor_ID']])
basetable.info()
basetable.head()
basetable['target'].value_counts()
'''Target period'''
start_target = datetime.datetime(year=2017, month=1, day=1)
end_target = datetime.datetime(year=2018, month=1, day=1)
start_target, end_target
'''Target period donation'''
gifts_target = gifts[(gifts['date']>=start_target) & (gifts['date']<end_target)]
gifts_target.info()
gifts_target.head(2)
'''Group and sum donations by donor'''
gifts_target_byid = gifts_target.groupby('id')['amount'].sum().reset_index()
gifts_target_byid.info()
gifts_target_byid.head(2)
'''Derive targets and add to basetable'''
targets = list(gifts_target_byid['id'][gifts_target_byid['amount']>500])
targets
# basetable['target'] = pd.Series([1 if donor_id in gifts_target_byid['id'].values.tolist() else 0 for donor_id in basetable['donor_ID']])
basetable['target'] = pd.Series([1 if donor_id in targets else 0 for donor_id in basetable['donor_ID']])
basetable.info()
basetable.head()
basetable['target'].value_counts()
###Output
_____no_output_____
###Markdown
2. Adding predictive variables
###Code
reference_date = datetime.datetime(2018,4,1)
reference_date
living_places.head(2)
living_places['active_period'] = living_places['end_date'] - living_places['start_date']
living_places.head(2)
living_places['lifetime'] = reference_date - living_places['start_date']
living_places.head(2)
living_places_reference_date = living_places[(living_places['start_date']<=reference_date) & (living_places['end_date']>reference_date)]
living_places_reference_date.info()
living_places_reference_date.head(2)
###Output
_____no_output_____
###Markdown
2.1. Adding aggregated variables
###Code
'''Start and end date of the aggregation method'''
start_date = datetime.datetime(2016,1,1)
end_date = datetime.datetime(2017,1,1)
start_date, end_date
'''Select gifts made in 2016'''
gifts_2016 = gifts[(gifts['date']>=start_date) & (gifts['date']<=end_date)]
gifts_2016.info()
gifts_2016.head(2)
'''Sum of gifts per donor in 2016'''
gifts_2016_bydonor = gifts_2016.groupby(['id'])['amount'].sum().reset_index()
gifts_2016_bydonor.info()
gifts_2016_bydonor.columns = ['donor_ID','sum_2016']
gifts_2016_bydonor.head(2)
basetable.head(2)
'''Add sum of gifts to the basetable'''
basetable = pd.merge(basetable, gifts_2016_bydonor, how='left', on='donor_ID')
basetable.info()
basetable.head(2)
'''Number of gifts per donor in 2016'''
gifts_2016_bydonor = gifts_2016.groupby(['id']).size().reset_index()
gifts_2016_bydonor.columns = ['donor_ID','count_2016']
gifts_2016_bydonor.head(2)
'''Add sum of gifts to the basetable'''
basetable = pd.merge(basetable, gifts_2016_bydonor, how='left', on='donor_ID')
basetable.info()
basetable.head(2)
###Output
_____no_output_____
###Markdown
2.2. Adding evolutions
###Code
start_2017 = datetime.datetime(2017,1,1)
start_2016 = datetime.datetime(2016,1,1)
start_2015 = datetime.datetime(2015,1,1)
start_2015, start_2016, start_2017
gifts_2016 = gifts[(gifts['date']<start_2017) & (gifts['date']>=start_2016)]
gifts_2016.info()
gifts_2016.head(2)
gifts_2015_and_2016 = gifts[(gifts['date']<start_2017) & (gifts['date']>=start_2015)]
gifts_2015_and_2016.info()
gifts_2015_and_2016.head(2)
number_gifts_2016 = gifts_2016.groupby(['id'])['amount'].size().reset_index()
number_gifts_2016.columns = ['donor_ID','number_gifts_2016']
number_gifts_2016.head(2)
number_gifts_2015_and_2016 = gifts_2015_and_2016.groupby(['id'])['amount'].size().reset_index()
number_gifts_2015_and_2016.columns = ['donor_ID','number_gifts_2015_and_2016']
number_gifts_2015_and_2016.head(2)
basetable = pd.merge(basetable, number_gifts_2016, on='donor_ID', how='left')
basetable = pd.merge(basetable, number_gifts_2015_and_2016, on='donor_ID', how='left')
basetable.head(2)
basetable.describe()
'''Calculate ratio of 2016 donations to total 2015-2016 donations'''
basetable['ratio_2015_to_2015_and_2016'] = basetable['number_gifts_2016'] / basetable['number_gifts_2015_and_2016']
basetable.describe()
###Output
_____no_output_____
###Markdown
2.3. Using evolution variables
###Code
from sklearn import linear_model
variables = ['gender','age','donation_last_year','ratio_month_year']
###Output
_____no_output_____ |
Case Study 1/Reservoir Engineering/Reservoir Engineering.ipynb | ###Markdown
**Reservoir Engineering**
###Code
# importing basic libraries
import pandas as pd
from pandas import DataFrame
import numpy as np
import requests
import random
import xlrd
import csv
from datetime import datetime
import os
import warnings
warnings.filterwarnings('ignore')
from datetime import datetime
# visualization/plotting libraries
import matplotlib as mpl
import matplotlib.style
import seaborn as sns
import matplotlib.pyplot as plt
# setting to default parameters
plt.rcParams.update(plt.rcParamsDefault)
# formatting for decimal places
pd.set_option("display.float_format", "{:.2f}".format)
sns.set_style("white")
from scipy.optimize import curve_fit
from sklearn.metrics import mean_squared_error
from math import sqrt
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima_model import ARIMA
# matplotlib settings
mpl.rcParams.update(mpl.rcParamsDefault)
plt.style.use('seaborn-white')
mpl.rcParams["figure.figsize"] = (12, 8)
mpl.rcParams["axes.grid"] = False
# setting seed for model reproducibility
seed_value = 42
os.environ['PYTHONHASHSEED'] = str(seed_value)
random.seed(seed_value)
np.random.seed(seed_value)
from google.colab import files
uploaded = files.upload()
# setting the destination for the data folder
path = os.path.join(os.getcwd(), "data")
norm_path = os.path.normpath(path)
# defining a function to scrape NDIC data
# https://www.dmr.nd.gov/oilgas/
# data from May 2015 to December 2018 will be used as a training dataset
# data from 2019 will be used as a test dataset
# function to scrape data from NDIC
def scrape_ndic(months_list):
'''function to scrape NDIC data'''
# link to website with production data
website = "https://www.dmr.nd.gov/oilgas/mpr/"
df = pd.DataFrame()
# loop through all of the dates in the list
for period in months_list:
url = website + period + ".xlsx"
req = requests.get(url)
book = xlrd.open_workbook(file_contents=req.content)
sheet = book.sheet_by_index(0)
for i in range(1, sheet.nrows):
temp_value = sheet.cell_value(i, 0)
year, month, day, hour, minute, second = xlrd.xldate_as_tuple(temp_value, book.datemode)
sheet._cell_values[i][0] = datetime(year, month, 1).strftime("%m/%Y")
        new_file = os.path.join(path, period + ".csv")
csv_file = open(new_file, "w", newline="")
writer = csv.writer(csv_file)
# iteration through each row for data pull
for rownum in range(sheet.nrows):
writer.writerow(sheet.row_values(rownum))
csv_file.close()
        month_df = pd.read_csv(new_file)
        df = df.append(month_df)
# dataframe with entire monthly production
return df
!unzip data
train_list = ["2015_05","2015_06","2015_07","2015_08","2015_09","2015_10","2015_11","2015_12",
"2016_01","2016_02","2016_03","2016_04","2016_05","2016_06","2016_07","2016_08","2016_09","2016_10","2016_11","2016_12",
"2017_01","2017_02","2017_03","2017_04","2017_05","2017_06","2017_07","2017_08","2017_09","2017_10","2017_11","2017_12",
"2018_01","2018_02","2018_03","2018_04","2018_05","2018_06","2018_07","2018_08","2018_09","2018_10","2018_11","2018_12"]
train_prod_data = scrape_ndic(train_list)
train_prod_data["ReportDate"] = pd.to_datetime(train_prod_data["ReportDate"])
#train_prod_data.to_csv("train_prod.csv")
test_list = ["2019_01","2019_02","2019_03","2019_04","2019_05","2019_06","2019_07","2019_08","2019_09","2019_10","2019_11","2019_12"]
test_prod_data = scrape_ndic(test_list)
test_prod_data["ReportDate"] = pd.to_datetime(test_prod_data["ReportDate"])
#test_prod_data.to_csv("test_prod.csv")
# ARPS Decline Curve Analysis
def pre_process(df, column):
df.drop("Unnamed: 0", axis=1, inplace=True)
df.info()
print(df.columns)
# descriptive statistics
df.describe().T
df.head(15)
df.nunique()
df.dtypes
df.shape
# filtering
df.dropna(inplace=True)
# drop rows where oil rate is 0
df = df[(df[column].notnull()) & (df[column] > 0)]
return df
def plot_production_rate(df):
'''Plot decline curve using production rates'''
sns.lineplot(x = df['ReportDate'], y = df['oil_rate'], markers=True, dashes=False,
label="Oil Production",color='blue', linewidth=1.5)
plt.title('Decline Curve', fontweight='bold', fontsize = 20)
plt.xlabel('Time', fontweight='bold', fontsize = 15)
plt.ylabel('Oil Production Rate (bbl/d)', fontweight='bold', fontsize = 15)
plt.show()
def decline_curve(curve_type, q_i):
if curve_type == "exponential":
def exponential_decline(T, d):
return q_i * np.exp(-d * T)
return exponential_decline
elif curve_type == "hyperbolic":
def hyperbolic_decline(T, d_i, b):
return q_i / np.power((1 + b * d_i * T), 1.0 / b)
return hyperbolic_decline
elif curve_type == "harmonic":
def parabolic_decline(T, d_i):
return q_i / (1 + d_i * T)
return parabolic_decline
else:
raise "Unknown Decline Curve!"
def L2_norm(Q, Q_obs):
return np.sum(np.power(np.subtract(Q, Q_obs), 2))
# reading train and test data
train_prod = pd.read_csv('data/train_prod.csv')
test_prod = pd.read_csv("data/test_prod.csv")
# Basic Processing and data exploration
train_prod = pre_process(train_prod, 'Oil')
test_prod = pre_process(test_prod, 'Oil')
# convert time to datetime and set as dataframe index
train_prod["ReportDate"] = pd.to_datetime(train_prod["ReportDate"])
test_prod["ReportDate"] = pd.to_datetime(test_prod["ReportDate"])
#bakken_data.set_index("ReportDate", inplace=True)
train_prod["First_Prod_Date"] = train_prod.groupby("API_WELLNO")["ReportDate"].transform('min')
train_prod["Days_Online"] = (train_prod["ReportDate"] - train_prod["First_Prod_Date"]).dt.days
# find the top 10 wells with highest production (sum)
grouped_data = train_prod.groupby(['API_WELLNO']).sum()
grouped_data = grouped_data.sort_values(by=['Oil'])
grouped_data = grouped_data.nlargest(10, 'Oil').reset_index()
example_wells = grouped_data['API_WELLNO'].to_numpy()
print (example_wells)
demo_well = [33053059210000, 33025021780000]
print('API:', demo_well)
df_temp = train_prod[train_prod['API_WELLNO'] == demo_well[1]]
df_temp["oil_rate"] = df_temp["Oil"] / df_temp["Days"]
df_temp['date_delta'] = (df_temp['ReportDate'] - df_temp['ReportDate'].min())/np.timedelta64(1,'D')
plot_production_rate(df_temp)
df_temp = df_temp[['date_delta', 'oil_rate']]
data = df_temp.to_numpy()
# T is number of days of production - cumulative
# q is production rate
T_train, q = data.T
print(T_train)
print(q)
# Assumption - determine qi from max value of first 3 months of production
df_initial_period = df_temp.head(3)
qi = df_initial_period['oil_rate'].max()
exp_decline = decline_curve("exponential", qi)
hyp_decline = decline_curve("hyperbolic", qi)
har_decline = decline_curve("harmonic", qi)
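# curve_fit returns the best-fit decline parameters (popt) and their covariance matrix (pcov);
# method="trf" selects SciPy's bounded trust-region-reflective least-squares solver.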
popt_exp, pcov_exp = curve_fit(exp_decline, T_train, q, method="trf")
popt_hyp, pcov_hyp = curve_fit(hyp_decline, T_train, q, method="trf")
popt_har, pcov_har = curve_fit(har_decline, T_train, q, method="trf")
print("L2 Norm of exponential decline: ", L2_norm(exp_decline(T_train, popt_exp[0]), q))
print("L2 Norm of hyperbolic decline decline: ",L2_norm(hyp_decline(T_train, popt_hyp[0], popt_hyp[1]), q))
print("L2 Norm of harmonic decline decline: ", L2_norm(har_decline(T_train, popt_har[0]), q))
# Predict
plt.scatter(T_train, q, color="black", marker="x", alpha=1)
pred_exp = exp_decline(T_train, popt_exp[0])
pred_hyp = hyp_decline(T_train, popt_hyp[0], popt_hyp[1])
pred_har = har_decline(T_train, popt_har[0])
plt.plot(T_train, pred_exp, color="red", label="Exponential", linewidth = 4)
plt.plot(T_train, pred_hyp, color="green", label="Hyperbolic", linestyle="--", linewidth = 4)
plt.plot(T_train, pred_har, color="blue", label="Harmonic", linestyle = ':', linewidth = 4)
plt.title('History Match', fontweight='bold', fontsize = 20)
plt.xlabel('Time', fontweight='bold', fontsize = 15)
plt.ylabel('Oil Production Rate (bbl/d)', fontweight='bold', fontsize = 15)
plt.legend(loc='best')
plt.show()
# Forecast
max_time_forecast = 5000
T_pred = np.linspace(min(T_train), max_time_forecast)
plt.scatter(T_train, q, color="black", marker="x", alpha=1)
pred_exp = exp_decline(T_pred, popt_exp[0])
pred_hyp = hyp_decline(T_pred, popt_hyp[0], popt_hyp[1])
pred_har = har_decline(T_pred, popt_har[0])
plt.plot(T_pred, pred_exp, color="red", label="Exponential", linewidth = 4)
plt.plot(T_pred, pred_hyp, color="green", label="Hyperbolic", linestyle="--", linewidth = 4)
plt.plot(T_pred, pred_har, color="blue", label="Harmonic", linestyle = ':', linewidth = 4)
plt.title('Forecast', fontweight='bold', fontsize = 20)
plt.xlabel('Time', fontweight='bold', fontsize = 15)
plt.ylabel('Oil Production Rate (bbl/d)', fontweight='bold', fontsize = 15)
plt.legend(loc='best')
plt.show()
# validation procedure
print('API:', demo_well[1])
df_temp_test = test_prod[test_prod['API_WELLNO'] == demo_well[1]]
df_temp_test["oil_rate"] = df_temp_test["Oil"] / df_temp_test["Days"]
df_temp_test['date_delta'] = (df_temp_test['ReportDate'] - df_temp_test['ReportDate'].min()) / np.timedelta64(1,'D')
print(df_temp_test)
df_temp_test = df_temp_test[['date_delta', 'oil_rate']]
data_test = df_temp_test.to_numpy()
# T is number of days of production - cumulative
# q is production rate
T_test, q_test = data_test.T
#T_test = np.concatenate(T_train, T)
print(T_test)
print(q_test)
time = pd.date_range(start='6/1/2015', periods= 54, freq='MS')
time
T_Test2 = T_train[-1] + T_test
len(T_train)
pred_hyp = hyp_decline(T_train, popt_hyp[0], popt_hyp[1])
pred_hyp2 = hyp_decline(T_Test2, popt_hyp[0], popt_hyp[1])
print(pred_hyp)
print(pred_hyp2)
# forecast
q_orig = np.append(q, q_test)
forecast = np.concatenate([pred_hyp, pred_hyp2])
# hyperbolic forecast - plot
plt.plot(time, q_orig, color="black", alpha = 0.8, label='Actual Data', linewidth = 4)
plt.plot(time, forecast, color="green", label="Hyperbolic Trend", linewidth = 4, linestyle="--")
plt.title('Production Forecast', fontweight='bold', fontsize = 20)
plt.xlabel('Time', fontweight='bold', fontsize = 15)
plt.ylabel('Oil Production Rate (bbl/d)', fontweight='bold', fontsize = 15)
plt.legend(loc='best')
plt.show()
rmse = sqrt(mean_squared_error(q_orig, forecast))
print("RMSE - Hyperbolic Method:", rmse)
###Output
RMSE - Hyperbolic Method: 81.41922167927639
###Markdown
**ARIMA MODEL BASED DCA**
###Code
def plot_production_series(series):
plt.figure(figsize=(10, 6))
plt.plot(series, color = 'blue')
plt.title("Oil Production Decline")
plt.xlabel("Year")
plt.ylabel("Production Rate (bbls/d)")
plt.show()
# data
train_prod = pd.read_csv('data/train_prod.csv')
test_prod = pd.read_csv("data/test_prod.csv")
print('Training Data:\n', train_prod.head(10))
print('\n')
print('Test Data:\n', train_prod.head(10))
# Preprocessing on train data
# well selection for demo - time series
train_prod = train_prod[train_prod["API_WELLNO"] == 33025021780000.0]
train_prod.drop("Unnamed: 0", axis=1, inplace=True)
train_prod["ReportDate"] = pd.to_datetime(train_prod["ReportDate"])
train_prod.set_index("ReportDate", inplace=True)
train_prod.nunique()
# converting data from dataframe to series - oil production
timeseries_train= train_prod["Oil"]
timeseries_train.head()
plot_production_series(timeseries_train)
# Preprocessing on test data
# well selection for demo - time series
test_prod = test_prod[test_prod["API_WELLNO"] == 33025021780000.0]
test_prod.drop("Unnamed: 0", axis=1, inplace=True)
test_prod["ReportDate"] = pd.to_datetime(test_prod["ReportDate"])
test_prod.set_index("ReportDate", inplace=True)
test_prod.nunique()
# time series is production volumes and not flow rates
timeseries_test = test_prod["Oil"]
timeseries_test.head()
plot_production_series(timeseries_test)
# ADF - Augmented Dickey-Fuller unit root test - to test stationarity
print("p-value:", adfuller(timeseries_train.dropna())[1])
# Perform Dickey-Fuller test:
def dickey_ful_test(series):
print("Results of Dickey-Fuller Test:")
df_test = adfuller(series, autolag="AIC")
df_output = pd.Series(df_test[0:4],index=["Test Statistic","p-value","#Lags Used","Number of Observations Used"])
for key, value in df_test[4].items():
df_output["Critical Value (%s)" % key] = value
print(df_output)
def stationary_test_plot(metric, data_series, method):
plt.figure(figsize=(10, 6))
orig = plt.plot(data_series, label="Original", color = 'blue')
metric = plt.plot(metric, label= method, color ='red')
plt.legend(loc="best")
plt.title(method)
plt.xlabel('Time (yyyy-mm)')
plt.ylabel('Oil Production (bbls)')
plt.show()
def stationary_test(data_series, method):
rolling_mean = data_series.rolling(10).mean()
stationary_test_plot(rolling_mean, data_series, method)
dickey_ful_test(data_series)
# test if the time series data is stationary or not
stationary_test(timeseries_train, "Rolling Mean")
def plot_time_series(y_axis, x_label, y_label, title):
plt.figure(figsize=(10, 6))
plt.plot(y_axis, label = y_label, color = 'blue')
plt.title(title)
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
# y axis transformation - log(data)
ts_log = np.log(timeseries_train)
plot_time_series(ts_log, "Time (yyyy-mm)", "log (Oil Production (bbls))", "Plot with Log transformation")
# rolling mean estimation and plot
rolling_mean_log = ts_log.rolling(10).mean()
plt.figure(figsize=(10, 6))
orig = plt.plot(ts_log, label="Original", color = 'blue')
mean = plt.plot(rolling_mean_log, label="Rolling Mean", color ='red')
plt.title("Rolling Mean - With Log Transformation")
plt.xlabel('Time (yyyy-mm)')
plt.ylabel('log(Oil Production (bbls))')
plt.legend(loc="best")
plt.show()
# plot of difference between log(data) and moving average
diff_log_rolmean = ts_log - rolling_mean_log
diff_log_rolmean.dropna(inplace=True)
stationary_test(diff_log_rolmean, "Diff - Log and Rolling Mean")
# exponential weighted calculations
weighted_avg_exp = ts_log.ewm(halflife=2).mean()
plt.figure(figsize=(10, 6))
orig = plt.plot(ts_log, label="Original", color = 'blue')
mean = plt.plot(weighted_avg_exp, label="Exponential Weighted Mean", color ='red')
plt.title("Exponential Weighted Mean - With Log Transformation")
plt.xlabel('Time (yyyy-mm)')
plt.ylabel('log(Oil Production (bbls))')
plt.legend(loc="best")
plt.show()
diff_log_ewm = ts_log - weighted_avg_exp
stationary_test(diff_log_ewm, "Diff - Log and Exponential Weighted Mean")
# First Order differencing - n this technique, we take the difference of the observation at a particular instant with
# that at the previous instant. This mostly works well in improving stationarity
# Differencing can help stabilize the mean of the time series by removing changes in the level of a time series,
# and so eliminating (or reducing) trend and seasonality
# https://machinelearningmastery.com/difference-time-series-dataset-python/
first_order_diff = ts_log - ts_log.shift()
first_order_diff.dropna(inplace=True)
plt.figure(figsize=(10, 6))
stationary_test(first_order_diff, "First Order Difference")
ts_log_diff_active = first_order_diff
lag_acf = acf(ts_log_diff_active, nlags=5)
lag_pacf = pacf(ts_log_diff_active, nlags=5, method="ols")
plt.figure(figsize=(10, 5))
# Plot ACF:
plt.subplot(121)
plt.plot(lag_acf, color = 'blue')
plt.axhline(y=0, linestyle="--", color="gray")
plt.axhline(y=-1.96 / np.sqrt(len(ts_log_diff_active)), linestyle="--", color="gray")
plt.axhline(y=1.96 / np.sqrt(len(ts_log_diff_active)), linestyle="--", color="gray")
plt.title("Autocorrelation Function")
# Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf, color = 'red')
plt.axhline(y=0, linestyle="--", color="gray")
plt.axhline(y=-1.96 / np.sqrt(len(ts_log_diff_active)), linestyle="--", color="gray")
plt.axhline(y=1.96 / np.sqrt(len(ts_log_diff_active)), linestyle="--", color="gray")
plt.title("Partial Autocorrelation Function")
plt.tight_layout()
plt.show()
# Auto-Regressive Model (p=2, d=1, q=0)
model_AR = ARIMA(ts_log, order=(2, 1, 0))
results_ARIMA_AR = model_AR.fit(disp=-1)
plt.figure(figsize=(10, 5))
plt.plot(ts_log_diff_active, color = 'blue')
plt.plot(results_ARIMA_AR.fittedvalues, color="red")
plt.title("RSS: %.3f" % sum((results_ARIMA_AR.fittedvalues - first_order_diff) ** 2))
plt.show()
# Moving Average Model (p=0, d=1, q=2)
model_MA = ARIMA(ts_log, order=(0, 1, 2))
results_ARIMA_MA = model_MA.fit(disp=-1)
plt.figure(figsize=(10, 5))
plt.plot(ts_log_diff_active, color = 'blue')
plt.plot(results_ARIMA_MA.fittedvalues, color="red")
plt.title("RSS: %.3f" % sum((results_ARIMA_MA.fittedvalues - first_order_diff) ** 2))
plt.show()
# Combined ARIMA model (p=2, d=1, q=2)
model = ARIMA(ts_log, order=(2, 1, 2))
results_ARIMA = model.fit(disp=-1)
print(results_ARIMA.summary())
plt.plot(ts_log_diff_active, color = 'blue')
plt.plot(results_ARIMA.fittedvalues, color="red")
plt.title("RSS: %.3f" % sum((results_ARIMA.fittedvalues - first_order_diff) ** 2))
plt.show()
# residual and kde plot
plt.figure(figsize=(10, 5))# plot residual errors
residuals = DataFrame(results_ARIMA.resid)
residuals.plot(legend=None, color = 'blue')
plt.title('Residuals - ARIMA History Match', fontweight='bold', fontsize = 20)
plt.show()
residuals.plot(kind='kde', legend=None, color = 'blue')
plt.title('Kernel Density Estimation - Plot', fontweight='bold', fontsize = 20)
plt.show()
print(residuals.describe())
# forecast - ARIMA model
results_ARIMA.plot_predict(1, 60)
plt.title('ARIMA Model Forecast', fontweight='bold', fontsize = 20)
plt.xlabel('Time (years)', fontweight='bold', fontsize = 15)
plt.ylabel('Production (bbls)', fontweight='bold', fontsize = 15)
plt.show()
# Predictions converted to right units - ARIMA
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_log = pd.Series(ts_log, index=ts_log.index)
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum, fill_value=0)
predictions_ARIMA = np.exp(predictions_ARIMA_log)
print(predictions_ARIMA)
plt.figure(figsize=(12, 8))
plt.plot(timeseries_train, linewidth = 2, color = 'black')
plt.plot(predictions_ARIMA, linestyle = "--", color = 'green', linewidth = 2)
plt.title("RMSE: %.3f" % np.sqrt(sum((predictions_ARIMA - timeseries_train) ** 2) / len(timeseries_train)), fontweight='bold', fontsize = 20)
plt.gca().legend(("Original Decline Curve", "ARIMA Model Decline Curve"))
plt.xlabel('Time (yyyy-mm)', fontweight='bold', fontsize = 15)
plt.ylabel('Oil Production (bbls)', fontweight='bold', fontsize = 15)
plt.show()
forecast = results_ARIMA.forecast(steps=12)[0]
forecast
# invert the differenced forecast results to covert to right units
X = timeseries_train.values
history = [x for x in X]
months_in_year = 12
Month = 1
# invert differenced value
def inverse_difference(history, yhat, interval=1):
return yhat + history[-interval]
for yhat in forecast:
inverted = inverse_difference(history, yhat, months_in_year)
print(Month, inverted)
history.append(inverted)
Month += 1
history
forecast_12_months = history[-12:] # last 12 forecasted values
predictions_ARIMA = predictions_ARIMA.to_numpy()
forecast_12_months = np.array(forecast_12_months)
print(predictions_ARIMA)
print(forecast_12_months)
arima_model_results = np.concatenate((predictions_ARIMA, forecast_12_months))
arima_model_results
timeseries_train.values # oil rate - train
timeseries_test # oil rate - test
forecast_12_months # oil rate - forecast
ts_np = timeseries_train.to_numpy()
ts_forecast = np.array(forecast_12_months)
ts_test_np = timeseries_test.to_numpy()
actual = np.concatenate([ts_np, ts_test_np])
actual = np.delete(actual, -1)
actual
forecast = np.concatenate([predictions_ARIMA, ts_forecast])
forecast = np.delete(forecast, -1)
forecast
time = pd.date_range(start='6/1/2015', periods= 54, freq='MS')
rmse = sqrt(mean_squared_error(actual, forecast))
print("RMSE - ARIMA Method:", rmse)
###Output
_____no_output_____ |
Day_20_BST_from_Preordered_Traversal.ipynb | ###Markdown
Problem. Return the root node of a binary search tree that matches the given preorder traversal. (Recall that a binary search tree is a binary tree where for every node, any descendant of node.left has a value < node.val, and any descendant of node.right has a value > node.val. Also recall that a preorder traversal displays the value of the node first, then traverses node.left, then traverses node.right.) Example 1: ```Input: [8,5,1,7,10,12] Output: [8,5,10,1,7,null,12]``` Note: - 1 <= preorder.length <= 100 - The values of preorder are distinct. Solution
###Code
from typing import List

# Definition for a binary tree node (normally provided by LeetCode; included here so the cell runs standalone).
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None
def bst_from_preorder(self, preorder: List[int]) -> TreeNode:
def insert(node_value, root):
while True:
if root.val > node_value:
if not root.left:
root.left = TreeNode(node_value)
break
else:
root = root.left
else:
if not root.right:
root.right = TreeNode(node_value)
break
else:
root = root.right
if len(preorder) == 0:
return
root = TreeNode(preorder[0])
head = root
for i in range(1, len(preorder)):
head = root
insert(preorder[i], head)
return head
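# Quick sanity check (hypothetical usage; on LeetCode this method would live inside a Solution class):
# root = bst_from_preorder(None, [8, 5, 1, 7, 10, 12])
# # expected: root.val == 8, root.left.val == 5, root.right.val == 10, root.right.right.val == 12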
###Output
_____no_output_____ |
mini-crops/2020-04-04-EDAMiniCrops.ipynb | ###Markdown
Processing Milwaukee Label (~3K labels) Building on `2020-03-24-EDA-Size.ipynb`. Goal is to prep a standard CSV that we can update and populate
###Code
import pandas as pd
import numpy as np
import os
import s3fs # for reading from S3FileSystem
import json # for working with JSON files
import matplotlib.pyplot as plt
pd.set_option('max_colwidth', -1)
SAGEMAKER_PATH = r'/home/ec2-user/SageMaker'
SPLIT_PATH = os.path.join(SAGEMAKER_PATH, 'classify-streetview', 'split-train-test')
MINI_PATH = os.path.join(SAGEMAKER_PATH, 'classify-streetview', 'mini-crops')
###Output
_____no_output_____
###Markdown
Alternative Template - one row per (label x crop) for the ~3K labels, with columns: img_id, heading, crop_id, label, dist_x_left, dist_x_right, dist_y_top, dist_y_bottom
###Code
df_labels = pd.read_csv(os.path.join(SPLIT_PATH, 'restructure_single_labels.csv'))
print(df_labels.shape)
df_labels.head()
df_labels_present = df_labels.loc[df_labels['present_ramp']]
df_labels_present['sv_image_y'].describe(percentiles = [0.25, 0.5, 0.75, 0.9, 0.95, 0.99])
df_labels_present['sv_image_x'].describe(percentiles = [0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99])
df_coor = pd.read_csv(os.path.join(MINI_PATH,'mini-crops.csv'), sep = '\t')
df_coor
df_outer = pd.concat([df_labels, df_coor], axis = 1)
df_outer.head(10)
df_outer.columns
# Let's just use a for loop and join back together
list_dfs = []
coor_cols = list(df_coor.columns)
for index, row in df_coor.iterrows():
df_temp_labels = df_labels
for col in coor_cols:
df_temp_labels[col] = row[col]
list_dfs.append(df_temp_labels)
print(df_temp_labels.shape)
# Let's just use a for loop and join back together
list_dfs = []
coor_cols = list(df_coor.columns)
for index, row in df_coor.iterrows():
df_temp_labels = df_labels.copy()
for col in coor_cols:
df_temp_labels[col] = row[col]
list_dfs.append(df_temp_labels)
print(df_temp_labels.shape)
df_concat = pd.concat(list_dfs)
df_concat.shape
df_concat['corner_x'].value_counts()
df_concat.head()
df_concat.to_csv(os.path.join(MINI_PATH, 'merged_crops_template.csv'), index = False)
df_concat.columns
###Output
_____no_output_____
###Markdown
Take the differences
###Code
df_concat['xpt_minus_xleft'] = df_concat['sv_image_x'] - df_concat['x_crop_left']
df_concat['xright_minus_xpt'] = df_concat['x_crop_right'] - df_concat['sv_image_x']
df_concat['ypt_minus_ytop'] = df_concat['sv_image_y'] - df_concat['y_crop_top']
df_concat['ybottom_minus_ypt'] = df_concat['y_crop_bottom'] - df_concat['sv_image_y']
positive_mask = (df_concat['xpt_minus_xleft'] > 0) & (df_concat['xright_minus_xpt'] > 0) & (df_concat['ypt_minus_ytop'] > 0) & (df_concat['ybottom_minus_ypt'] > 0)
df_concat['label_in_crop'] = positive_mask
df_concat['label_in_crop'].value_counts()
df_incrop = df_concat.loc[df_concat['label_in_crop']]
df_incrop.shape
df_incrop['crop_num'].value_counts()
df_incrop.head()
df_incrop.to_csv(os.path.join(MINI_PATH, 'Crops_with_Labels.csv'), index = False)
###Output
_____no_output_____
###Markdown
Visualize Label Locations: xpt_minus_xleft - x location in the crop relative to bottom left (0, 0); ybottom_minus_ypt - y location in the crop relative to bottom left (0, 0)
###Code
df_concat_present = df_concat.loc[df_concat['present_ramp']].drop_duplicates()
df_incrop_present = df_incrop.loc[df_incrop['present_ramp']]
fig = plt.figure(figsize = (18, 3))
colors_list = ['tab:red', 'orange', 'gold', 'forestgreen', 'blue', 'indigo']
for crop_id, crop_name in enumerate(['A', 'B', 'C', 'D', 'E', 'F']):
ax = fig.add_subplot(1, 6, crop_id+1)
x = df_incrop_present['xpt_minus_xleft'].loc[df_incrop_present['crop_num'] == crop_name]
y = df_incrop_present['ybottom_minus_ypt'].loc[df_incrop_present['crop_num'] == crop_name]
ax.plot(x, y, marker = '.', ls = 'none', alpha = 0.6, color = colors_list[int(crop_id)])
#ax.plot(x, y, marker = '.', ls = 'none', alpha = 0.4)
plt.ylim(0, 180)
plt.xlim(0, 180)
plt.title(f'Crop: {crop_name}')
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.tight_layout()
fig2 = plt.figure(figsize = (6, 6))
ax2 = fig2.add_subplot(111)
inside_mask = (df_concat_present['sv_image_y'] < 500) & (df_concat_present['sv_image_x'] > 5) & (df_concat_present['sv_image_x'] < 635)
x_crop = df_concat_present['sv_image_x'].loc[inside_mask]
y_crop = df_concat_present['sv_image_y_bottom_origin'].loc[inside_mask]
ax2.plot(x_crop, y_crop, marker = '.', ls = 'none', alpha = 0.4, color = 'blue', label = 'in crop')
outside_mask = (df_concat_present['sv_image_y'] > 500) | (df_concat_present['sv_image_x'] < 5) | (df_concat_present['sv_image_x'] > 635)
x_all = df_concat_present['sv_image_x'].loc[outside_mask]
y_all = df_concat_present['sv_image_y_bottom_origin'].loc[outside_mask]
ax2.plot(x_all, y_all, marker = '.', ls = 'none', alpha = 0.4, color = 'orange', label = 'outside')
plt.ylim(0, 640)
plt.xlim(0, 640)
plt.legend(loc = 'best')
ax2.set_yticklabels([])
ax2.set_xticklabels([])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Look at the Present Distribution. How many are within a crop or not
###Code
inside_mask.value_counts()
9600 / (9600 + 462) * 100
###Output
_____no_output_____
###Markdown
Look at Image 1908 Details
###Code
df_incrop.loc[df_incrop['filename'] == '1908_135.jpg']
###Output
_____no_output_____ |
Main Python notebook.ipynb | ###Markdown
9608/22/PRE/O/N/2020. Last update: Anuj Verma, 03:16 PM 06/10/2020. The cell below declares the variables and arrays that are supposed to be pre-populated.
###Code
ItemCode = ["1001", "6056", "5557", "2568", "4458"]
ItemDescription = ["Pencil", "Pen", "Notebook", "Ruler", "Compass"]
Price = [1.0, 10.0, 100.0, 20.0, 30.0]
NumberInStock = [100, 100, 50, 20, 20]
n = len(ItemCode)
###Output
_____no_output_____
###Markdown
TASK 1.4. Write program code to produce a report displaying all the information stored about each item for which the number in stock is below a given level. The planning and identifier table are in the pseudocode file and the markdown respectively.
###Code
ThresholdLevel = int(input("Enter the minumum stock level: "))
for Counter in range(n):
if NumberInStock[Counter] < ThresholdLevel:
print("\nItem Code:", ItemCode[Counter])
print("Item Description:", ItemDescription[Counter])
print("Price:", Price[Counter])
print("Number in stock:", NumberInStock[Counter])
###Output
_____no_output_____
###Markdown
TASK 2.2. Design an algorithm to input the four pieces of data about a stock item, form a string according to your format design, and write the string to the text file. First draw a program flowchart, then write the equivalent pseudocode.
###Code
RecordsFile = "Item Records.txt"
FileObject = open(RecordsFile, "a+")
WriteString = ""
NewItemCode = int(input("\nEnter item code: "))
WriteString = ':' + str(NewItemCode)
NewItemDescription = input("Enter item description: ")
WriteString += ':' + NewItemDescription
NewPrice = float(input("Enter new price: "))
WriteString += ':' + str(NewPrice)
NewNumberInStock = int(input("Enter the number of items in stock: "))
WriteString += ':' + str(NewNumberInStock) + '\n'
FileObject.write(WriteString)
FileObject.close()
###Output
_____no_output_____
###Markdown
TASK 2.4. The cell below defines the sub-routines which will be used by more than one of the tasks.
###Code
def GetItemCode():
TestItemCode = int(input("Enter the code of the item: "))
while not (TestItemCode > 1000 and TestItemCode < 9999):
TestItemCode = int(input("Re-enter the code of the item: "))
return TestItemCode
def GetNumberInStock():
TestNumberInStock = int(input("Enter the number of the item in stock: "))
while not (TestNumberInStock >= 0):
TestNumberInStock = int(input("Re-enter the number of the item in stock: "))
return TestNumberInStock
def GetPrice():
TestPrice = float(input("Enter the price of the item: "))
while not (TestPrice >= 0):
TestPrice = float(input("Re-enter the price of the item: "))
return TestPrice
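# Records are stored as ':ItemCode:ItemDescription:Price:NumberInStock\n';
# ExtractDetails below walks that string and copies the four colon-separated fields into the Details list.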
def ExtractDetails(RecordString, Details):
Position = 0
SearchString = RecordString.strip() + ':'
if RecordString != "":
for Counter in range(4):
Position += 1
CurrentCharacter = SearchString[Position : Position + 1]
while CurrentCharacter != ':':
Details[Counter] += CurrentCharacter
Position += 1
CurrentCharacter = SearchString[Position : Position + 1]
###Output
_____no_output_____
###Markdown
TASK 2.4 (1). Add a new stock item to the text file. Include validation of the different pieces of information as appropriate. For example, item code data may be a fixed format.
###Code
WriteString = ""
WriteString = ':' + str(GetItemCode())
NewItemDescription = input("\nEnter item description: ")
WriteString += ':' + NewItemDescription
WriteString += ':' + str(GetPrice())
WriteString += ':' + str(GetNumberInStock()) + '\n'
FileObject = open(RecordsFile, "a+")
FileObject.write(WriteString)
FileObject.close()
###Output
_____no_output_____
###Markdown
TASK 2.4 (2). Search for a stock item with a specific item code. Output the other pieces of data together with suitable supporting text.
###Code
Found = False
CurrentRecord = ""
print("\nEnter the code of the item you want to search for.")
DesiredItemCode = GetItemCode()
FileObject = open(RecordsFile, "r+")
FileData = FileObject.readlines()
FileObject.close()
for record in FileData:
CurrentRecord = record
if CurrentRecord[1:5] == str(DesiredItemCode):
Found = True
break
if Found:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(CurrentRecord, DetailsOfRecord)
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
else:
print("Item not found.")
###Output
_____no_output_____
###Markdown
TASK 2.4 (3). Search for all stock items with a specific item description, with output as for task 2.
###Code
DesiredItemDescription = input("\nEnter the description of the item you want to search for: ")
FileObject = open(RecordsFile, "r+")
FileData = FileObject.readlines()
FileObject.close()
for record in FileData:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(record, DetailsOfRecord)
if DetailsOfRecord[1] == DesiredItemDescription:
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
###Output
_____no_output_____
###Markdown
TASK 2.4 (4). Output a list of all stock items with a price greater than a given amount.
###Code
print("\nEnter the threshold price (items priced above this will be listed).")
ThresholdPrice = GetPrice()
FileObject = open(RecordsFile, "r+")
FileData = FileObject.readlines()
FileObject.close()
for record in FileData:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(record, DetailsOfRecord)
    if float(DetailsOfRecord[2]) > ThresholdPrice:   # the task asks for items priced above the given amount
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
###Output
_____no_output_____
###Markdown
Standalone Compiled Program. The above cells demonstrate how each individual aspect of each task works. The code in the cell below combines every task into one program and can be run independently.
###Code
## Arrays which are supposed to be pre-populated
ItemCode = ["1001", "6056", "5557", "2568", "4458"]
ItemDescription = ["Pencil", "Pen", "Notebook", "Ruler", "Compass"]
Price = [1.0, 10.0, 100.0, 20.0, 30.0]
NumberInStock = [100, 100, 50, 20, 20]
## Constant for the initial number of element (pre-defined)
n = len(ItemCode)
## Constant for the name of the file
RecordsFile = "Item Records.txt"
## Open file for "APPEND" and assign the I/O reference to a variable
FileObject = open(RecordsFile, "a")
## Subroutine to input a valid item code
def GetItemCode():
TestItemCode = int(input("Enter the code of the item: "))
while not (TestItemCode > 1000 and TestItemCode < 9999):
TestItemCode = int(input("Re-enter the code of the item: "))
return TestItemCode
## Subroutine to input a valid number of the item in stock
def GetNumberInStock():
TestNumberInStock = int(input("Enter the number of the item in stock: "))
while not (TestNumberInStock >= 0):
TestNumberInStock = int(input("Re-enter the number of the item in stock: "))
return TestNumberInStock
## Subroutine to input a valid item price
def GetPrice():
TestPrice = float(input("Enter the price of the item: "))
while not (TestPrice >= 0):
TestPrice = float(input("Re-enter the price of the item: "))
return TestPrice
## Subroutine to extract details of a given record string into an array
def ExtractDetails(RecordString, Details):
Position = 0
SearchString = RecordString.strip() + ':'
if RecordString != "":
for Counter in range(4):
Position += 1
CurrentCharacter = SearchString[Position : Position + 1]
while CurrentCharacter != ':':
Details[Counter] += CurrentCharacter
Position += 1
CurrentCharacter = SearchString[Position : Position + 1]
## TASK 1.4
ThresholdLevel = int(input("Enter the minimum stock level: "))
for Counter in range(n):
if NumberInStock[Counter] < ThresholdLevel:
print("\nItem Code:", ItemCode[Counter])
print("Item Description:", ItemDescription[Counter])
print("Price:", Price[Counter])
print("Number in stock:", NumberInStock[Counter])
## TASK 2.2
WriteString = ""
NewItemCode = int(input("\nEnter item code: "))
WriteString = ':' + str(NewItemCode)
NewItemDescription = input("Enter item description: ")
WriteString += ':' + NewItemDescription
NewPrice = float(input("Enter new price: "))
WriteString += ':' + str(NewPrice)
NewNumberInStock = int(input("Enter the number of items in stock: "))
WriteString += ':' + str(NewNumberInStock) + '\n'
FileObject.write(WriteString)
print("")
## TASK 2.4 (1)
WriteString = ""
WriteString = ':' + str(GetItemCode())
NewItemDescription = input("Enter item description: ")
WriteString += ':' + NewItemDescription
WriteString += ':' + str(GetPrice())
WriteString += ':' + str(GetNumberInStock()) + '\n'
FileObject.write(WriteString)
## Close the file and save changes
FileObject.close()
## Open the file in "READ" mode
FileObject = open(RecordsFile, "r")
## Read data from the file into an array. They are also split using the newline delimiter '\n'.
FileData = FileObject.readlines()
## Close the file
FileObject.close()
## TASK 2.4 (2)
Found = False
print("\nEnter the code of the item you want to search for.")
DesiredItemCode = GetItemCode()
for record in FileData:
if record[1:5] == str(DesiredItemCode):
Found = True
break
if Found:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(record, DetailsOfRecord)
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
else:
print("Item not found.")
## TASK 2.4 (3)
DesiredItemDescription = input("\nEnter the description of the item you want to search for: ")
for record in FileData:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(record, DetailsOfRecord)
if DetailsOfRecord[1] == DesiredItemDescription:
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
## TASK 2.4 (4)
print("\nEnter the threshold price (items priced above this will be listed).")
ThresholdPrice = GetPrice()
for record in FileData:
DetailsOfRecord = ["" for i in range(4)]
ExtractDetails(record, DetailsOfRecord)
    if float(DetailsOfRecord[2]) > ThresholdPrice:   # the task asks for items priced above the given amount
print("\nItem Code: " + str(DetailsOfRecord[0]))
print("Item Description: " + DetailsOfRecord[1])
print("Price of item: " + str(DetailsOfRecord[2]))
print("Number of the item in stock: " + str(DetailsOfRecord[3]))
###Output
_____no_output_____ |
Tutorial-Template_Style_Guide.ipynb | ###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); The NRPy+ Jupyter Tutorial Style Guide / Template Authors: Brandon Clark, Zach Etienne, & First Last Formatting improvements courtesy Brandon Clark **This is a warning message, in red text and bolded, to warn anyone using the module that it is, for example, actively in development, not yet validated, etc. Warning messages are optional.** This module implements a template designed by Brandon Clark to be used as a style guide for all tutorial notebooks within NRPy+. Items in Markdown code contained within "" are not included within the output (double click this box to see what I mean). To the run Markdown code simply hit "Shift + Enter" or the "Run" button above. **This text discusses how a module has been validated against other existing code or modules. This text is given a green font color and bolded. See how to bold and make text different colors in the Markdown code.** NRPy+ Source Code for this module:1. [Template_Style_Guide.py](../edit/Template_Style_Guide.py); [\[**tutorial**\]](Tutorial-Template_Style_Guide.ipynb) This is where you would describe what purpose this source code serves in this module. Read how to correctly link to these source code files/tutorial notebooks later in []. 1. Introduction:Here you write an introduction that discusses in slight detail the framework of this tutorial notebook. Here you may reference external works or websites on which pieces of your module rely. It is often helpful to include an enumerated algorihtm to highlight this modules processes. Within the algortihm you may refer to where source code is implemeted as a part of this module. The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Constructing a Table of Contents1. 1. Discussing [Markdown Linking Protocol](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) 1. Linking to sections internally within the module 1. Linking to external sources1. No parts of this template tutorial notebook rely on NRPy+-based components1. Converting Jupyter notebook to output LaTex PDFYou could also write your introduction to include subsections preceded by . introduction subsection:Include information relevant to this subsection here. Other (Optional): You may include any number of items here within the first box of the tutorial notebook, but I suggest being minimalistic when you can. Other sections that have been included in other tutorial moudles are as follows Note on Notation:When using a new type of notation for the first time within the NRPy+ tutorial, you may want to include some notes on that here. Citations:This is a great place to list out the references you link to within the module with actual citations. Table of Contents$$\label{toc}$$This notebook is organized as follows0. [Preliminaries](prelim): This is an optional section1. [Step 1](linking): The Markdown Linking Protocol 1. [Step 1.a](internal_links) Internal linking with the Jupyter notebook, Table of Contents 1. [Step 1.b](external_links): External linking outside of Jupyter notebook 1. [Step 1.b.i](nrpy_links): Linking to other files/modules within NRPy+ 1. [Step 2](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF fileThe Table of Contents (ToC) plays a significant role in the formatting of your module. 
The above ToC is for this module, but I have constructed it in a way such that you should see all of the important details for any module you need to write. If you choose to include a preliminaries section, enumerate it with the "0." All other sections, subsections, and sub-subsections can be enumerated with the 1. Jupyter/LaTex will handle there own numerbing/lettering scheme. It is important when creating subsections and sub-subsections that you indent seen in the Markdown code. The text colors vary for the level section you're assigning within the Markdown code. When writing within the brackets to specify a step number, the following scheme is to be used:* Header Sections: Step 1, Step 2, Step 3* Subsections: Step 1.a, Step 1.b, Step 1.c* Sub-subsections: Step 1.a.i, 1.a.ii, 1.a.iii If for some reason you go more then three levels deep in your sectioning, I would suggest finding a way to reorganize your sectioning to prevent that, or ask Zach Etienne what the next level of labelling for Steps should be. We will talk about the other components within the Markdown Code for the ToC in [Step 1.a](internal_links). The only text within the ToC section of this module should be the ToC code itself and what precedes it.I also suggest that the titles for the Steps you include here following the ":" match the titles you use throughout your module. Preliminaries: This is an optional section \[Back to [top](toc)\]$$\label{prelim}$$ This section is a great chance to include textual verbage that might have been too specific for the introduction section, but serves as a beneficial setup to the remainder of the module. For instance, you may want to define quantities here, express important equations, and so on. I suggest that the Preliminaries section is not followed by any Python code blocks, and remains simply a block of information for users to refer back to. Step 1: The Markdown Linking Protocol \[Back to [top](toc)\]$$\label{linking}$$We have already within this template had to link to sources both internally within this module, exteranlly to other components of the NRPy+ tutorial, as well as externally to additional web sources. The next few sections discuss how this is done. It is important to know that any linking is down by combining brackets and parentheses "\[ \]()" with the desired input in each. On another note, main sections like this have their titles preoceed by a single . As you will see, for every deeper layer of sectioning, an addiotnal is appended on, reducing the size of the text. Step 1.a: Internal linking with the Jupyter notebook, Table of Contents \[Back to [top](toc)\]$$\label{internal_links}$$A great resource for how to construct a Table of Contents is:https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3The Table of Contents is a family of internal links. To link internally we first have to specify an ***anchor tag*** which is the text within the parenthese of preceded by a (See ToC Markdown code). For instance, the anchor tag for this subsection is "internal_links". So, for a particular Step within the Table of Contents you specify the Step title in brackets (e.g., [Step 1.a]), appended by the anchor tag in parentheses preced by a (e.g., (internal_links)), followed by a ":" and the Step description (e.g., : Internal linking with the Jupyter notebook, Table of Contents). Look at the Markdown code for the Table of Contents for a few examples. **Important Note**: The anchor tags cannot be anything that you want. 
Anchor tags must be entirely lowercase and contain no spaces. Numbers are fine as well as underscores, but not capitalization. I suggest making the anchor tags have siginificant meaning to the section there tied to, instead of making one that reads "step1a". The reason I say this, is because if you ever need to resection your module, the tags won't all need to be chnaged as well if you give each one a unqiue name. All we have done so far is establish anchor tags and clickable links within the Table of Contents, but how do we establish the link to the specific section within the module. Opening up the Markdown code for this section you will see a line of code above the title, and a line of code directly below the title. These are the answers to the question. Each section requires these components to be included for both the Jupyter notebook and LaTex internal linking. Make sure the top line of the Markdown code has a space between it and the title. Similarly, the code directly beneath the title needs space below it as well, separated from the main body of text (see above in Markdown code).**Important Note**: Links do not work unless the two sections which are linked have been run.The Table of Contents is now linked to this section and you may have already noticed but this section, and all others, are linked back to the Table of Contents using the Markdown code in line at the end of the section title. This is exceedingly convenient for modules of great length. It may also be convenient when you're in a particular subsection and you wish to just return to the header section. This is accomplished using a bracket parentheses \[\]() pairing like so (see this in Markdown code). Go back to [Step 1](linking)Lastly, you would more often than not write a code block below implementing what was discussed in this section. This isn't always necessary, some header sections plainly serve as a set up for subsections that will contain all of the necessary coding components.
###Code
# This is the code block corresponding to Step 1.a: Internal linking within the Jupyter notebook, Table of Contents
print("We have successfully learned how to code internal links using Markdown Linking Protocol!!!")
###Output
We have successfully learned how to code internal links using Markdown Linking Protocol!!!
###Markdown
Step 1.b: External linking outside of this module \[Back to [top](toc)\]$$\label{external_links}$$To link outside of this particular module we still use bracket parentheses \[ \]() pairings. Since the links are not internal, we no longer need the symbol and anchor tags. Instead, you need an actual link. For instance, look at your Markdown code to see how we link this [website](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) to a line of text. Of course, web links will simply work on there own as a hyperlink, but often you may need to link to multiple external sources and do not want all of the individual addresses clogging up the body of your text.
###Code
# This is the code block for Step 1.b: External linking outside of Jupyter notebook
print("Be efficient in how you link external sources, utilize []() pairs!!!")
###Output
Be efficient in how you link external sources, utilize []() pairs!!!
###Markdown
Step 1.b.i: Linking to other files/modules within NRPy+ \[Back to [top](toc)\]$$\label{nrpy_links}$$Other useful extranal sources we would like to link to are the existing files/moudles within NRPy+. To do this we again resort to the \[ \]() pair. By simply typing the file name into the parentheses, you can connect to another [Tutorial module](Tutorial-Template_Style_Guide.ipynb) (see Markdown). To access a .py file, you want to type the command ../edit/followed by the file location. For instance, here is the [.py file](../edit/Template_Style_Guide.py) for this notebook (see Markdown).
###Code
# This is the code block for Step 1.b.i: Linking to other files/modules within NRPy+
print("Template_Style_Guide.py is an empty file...")
###Output
Template_Style_Guide.py is an empty file...
###Markdown
Step 2: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Template_Style_Guide.pdf](Tutorial-Template_Style_Guide.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)**Important Note**: Make sure that the file name is right in all six locations, two here in the Markdown, four in the code below. * Tutorial-Template_Style_Guide.pdf* Tutorial-Template_Style_Guide.ipynb* Tutorial-Template_Style_Guide.tex
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Template_Style_Guide.ipynb
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); The NRPy+ Jupyter Tutorial Style Guide / Template Authors: Brandon Clark, Zach Etienne, & First Last Formatting improvements courtesy Brandon Clark **This is a warning message, in red text and bolded, to warn anyone using the module that it is, for example, actively in development, not yet validated, etc. Warning messages are optional.** This module implements a template designed by Brandon Clark to be used as a style guide for all tutorial notebooks within NRPy+. Items in Markdown code contained within "" are not included within the output (double click this box to see what I mean). To the run Markdown code simply hit "Shift + Enter" or the "Run" button above. **This text discusses how a module has been validated against other existing code or modules. This text is given a green font color and bolded. See how to bold and make text different colors in the Markdown code.** NRPy+ Source Code for this module:1. [Template_Style_Guide.py](../edit/Template_Style_Guide.py); [\[**tutorial**\]](Tutorial-Template_Style_Guide.ipynb) This is where you would describe what purpose this source code serves in this module. Read how to correctly link to these source code files/tutorial notebooks later in []. 1. Introduction:Here you write an introduction that discusses in slight detail the framework of this tutorial notebook. Here you may reference external works or websites on which pieces of your module rely. It is often helpful to include an enumerated algorihtm to highlight this modules processes. Within the algortihm you may refer to where source code is implemeted as a part of this module. The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Constructing a Table of Contents1. 1. Discussing [Markdown Linking Protocol](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) 1. Linking to sections internally within the module 1. Linking to external sources1. No parts of this template tutorial notebook rely on NRPy+-based components1. Converting Jupyter notebook to output LaTex PDFYou could also write your introduction to include subsections preceded by . introduction subsection:Include information relevant to this subsection here. Other (Optional): You may include any number of items here within the first box of the tutorial notebook, but I suggest being minimalistic when you can. Other sections that have been included in other tutorial moudles are as follows Note on Notation:When using a new type of notation for the first time within the NRPy+ tutorial, you may want to include some notes on that here. Citations:This is a great place to list out the references you link to within the module with actual citations. Table of Contents$$\label{toc}$$This notebook is organized as follows0. [Preliminaries](prelim): This is an optional section1. [Step 1](linking): The Markdown Linking Protocol 1. [Step 1.a](internal_links) Internal linking with the Jupyter notebook, Table of Contents 1. [Step 1.b](external_links): External linking outside of Jupyter notebook 1. [Step 1.b.i](nrpy_links): Linking to other files/modules within NRPy+ 1. [Step 2](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF fileThe Table of Contents (ToC) plays a significant role in the formatting of your module. 
The above ToC is for this module, but I have constructed it in a way such that you should see all of the important details for any module you need to write. If you choose to include a preliminaries section, enumerate it with the "0." All other sections, subsections, and sub-subsections can be enumerated with the 1. Jupyter/LaTex will handle there own numerbing/lettering scheme. It is important when creating subsections and sub-subsections that you indent seen in the Markdown code. The text colors vary for the level section you're assigning within the Markdown code. When writing within the brackets to specify a step number, the following scheme is to be used:* Header Sections: Step 1, Step 2, Step 3* Subsections: Step 1.a, Step 1.b, Step 1.c* Sub-subsections: Step 1.a.i, 1.a.ii, 1.a.iii If for some reason you go more then three levels deep in your sectioning, I would suggest finding a way to reorganize your sectioning to prevent that, or ask Zach Etienne what the next level of labelling for Steps should be. We will talk about the other components within the Markdown Code for the ToC in [Step 1.a](internal_links). The only text within the ToC section of this module should be the ToC code itself and what precedes it.I also suggest that the titles for the Steps you include here following the ":" match the titles you use throughout your module. Preliminaries: This is an optional section \[Back to [top](toc)\]$$\label{prelim}$$ This section is a great chance to include textual verbage that might have been too specific for the introduction section, but serves as a beneficial setup to the remainder of the module. For instance, you may want to define quantities here, express important equations, and so on. I suggest that the Preliminaries section is not followed by any Python code blocks, and remains simply a block of information for users to refer back to. Step 1: The Markdown Linking Protocol \[Back to [top](toc)\]$$\label{linking}$$We have already within this template had to link to sources both internally within this module, exteranlly to other components of the NRPy+ tutorial, as well as externally to additional web sources. The next few sections discuss how this is done. It is important to know that any linking is down by combining brackets and parentheses "\[ \]()" with the desired input in each. On another note, main sections like this have their titles preoceed by a single . As you will see, for every deeper layer of sectioning, an addiotnal is appended on, reducing the size of the text. Step 1.a: Internal linking with the Jupyter notebook, Table of Contents \[Back to [top](toc)\]$$\label{internal_links}$$A great resource for how to construct a Table of Contents is:https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3The Table of Contents is a family of internal links. To link internally we first have to specify an ***anchor tag*** which is the text within the parenthese of preceded by a (See ToC Markdown code). For instance, the anchor tag for this subsection is "internal_links". So, for a particular Step within the Table of Contents you specify the Step title in brackets (e.g., [Step 1.a]), appended by the anchor tag in parentheses preced by a (e.g., (internal_links)), followed by a ":" and the Step description (e.g., : Internal linking with the Jupyter notebook, Table of Contents). Look at the Markdown code for the Table of Contents for a few examples. **Important Note**: The anchor tags cannot be anything that you want. 
Anchor tags must be entirely lowercase and contain no spaces. Numbers are fine as well as underscores, but not capitalization. I suggest making the anchor tags have siginificant meaning to the section there tied to, instead of making one that reads "step1a". The reason I say this, is because if you ever need to resection your module, the tags won't all need to be chnaged as well if you give each one a unqiue name. All we have done so far is establish anchor tags and clickable links within the Table of Contents, but how do we establish the link to the specific section within the module. Opening up the Markdown code for this section you will see a line of code above the title, and a line of code directly below the title. These are the answers to the question. Each section requires these components to be included for both the Jupyter notebook and LaTex internal linking. Make sure the top line of the Markdown code has a space between it and the title. Similarly, the code directly beneath the title needs space below it as well, separated from the main body of text (see above in Markdown code).**Important Note**: Links do not work unless the two sections which are linked have been run.The Table of Contents is now linked to this section and you may have already noticed but this section, and all others, are linked back to the Table of Contents using the Markdown code in line at the end of the section title. This is exceedingly convenient for modules of great length. It may also be convenient when you're in a particular subsection and you wish to just return to the header section. This is accomplished using a bracket parentheses \[\]() pairing like so (see this in Markdown code). Go back to [Step 1](linking)Lastly, you would more often than not write a code block below implementing what was discussed in this section. This isn't always necessary, some header sections plainly serve as a set up for subsections that will contain all of the necessary coding components.
###Code
# This is the code block corresponding to Step 1.a: Internal linking within the Jupyter notebook, Table of Contents
print("We have successfully learned how to code internal links using Markdown Linking Protocol!!!")
###Output
We have successfully learned how to code internal links using Markdown Linking Protocol!!!
###Markdown
Step 1.b: External linking outside of this module \[Back to [top](toc)\]$$\label{external_links}$$To link outside of this particular module we still use bracket parentheses \[ \]() pairings. Since the links are not internal, we no longer need the symbol and anchor tags. Instead, you need an actual link. For instance, look at your Markdown code to see how we link this [website](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) to a line of text. Of course, web links will simply work on there own as a hyperlink, but often you may need to link to multiple external sources and do not want all of the individual addresses clogging up the body of your text.
###Code
# This is the code block for Step 1.b: External linking outside of Jupyter notebook
print("Be efficient in how you link external sources, utilize []() pairs!!!")
###Output
Be efficient in how you link external sources, utilize []() pairs!!!
###Markdown
Step 1.b.i: Linking to other files/modules within NRPy+ \[Back to [top](toc)\]$$\label{nrpy_links}$$Other useful extranal sources we would like to link to are the existing files/moudles within NRPy+. To do this we again resort to the \[ \]() pair. By simply typing the file name into the parentheses, you can connect to another [Tutorial module](Tutorial-Template_Style_Guide.ipynb) (see Markdown). To access a .py file, you want to type the command ../edit/followed by the file location. For instance, here is the [.py file](../edit/Template_Style_Guide.py) for this notebook (see Markdown).
###Code
# This is the code block for Step 1.b.i: Linking to other files/modules within NRPy+
print("Template_Style_Guide.py is an empty file...")
###Output
Template_Style_Guide.py is an empty file...
###Markdown
Step 2: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Template_Style_Guide.pdf](Tutorial-Template_Style_Guide.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)**Important Note**: Make sure that the file name is right in all six locations, two here in the Markdown, four in the code below. * Tutorial-Template_Style_Guide.pdf* Tutorial-Template_Style_Guide.ipynb* Tutorial-Template_Style_Guide.tex
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Template_Style_Guide.ipynb
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!pdflatex -interaction=batchmode Tutorial-Template_Style_Guide.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-Template_Style_Guide.ipynb to latex
[NbConvertApp] Writing 37877 bytes to Tutorial-Template_Style_Guide.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); The NRPy+ Jupyter Tutorial Style Guide / Template Authors: Brandon Clark, Zach Etienne, & First Last Formatting improvements courtesy Brandon Clark**This is a warning message, in red text and bolded, to warn anyone using the module that it is, for example, actively in development, not yet validated, etc. Warning messages are optional.** This module implements a template designed by Brandon Clark to be used as a style guide for all tutorial notebooks within NRPy+. Items in Markdown code contained within "" are not included within the output (double click this box to see what I mean). To the run Markdown code simply hit "Shift + Enter" or the "Run" button above. **This text discusses how a module has been validated against other existing code or modules. This text is given a green font color and bolded. See how to bold and make text different colors in the Markdown code.** NRPy+ Source Code for this module:1. [Template_Style_Guide.py](../edit/Template_Style_Guide.py); [\[**tutorial**\]](Tutorial-Template_Style_Guide.ipynb) This is where you would describe what purpose this source code serves in this module. Read how to correctly link to these source code files/tutorial notebooks later in []. 1. Introduction:Here you write an introduction that discusses in slight detail the framework of this tutorial notebook. Here you may reference external works or websites on which pieces of your module rely. It is often helpful to include an enumerated algorihtm to highlight this modules processes. Within the algortihm you may refer to where source code is implemeted as a part of this module. The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Constructing a Table of Contents1. 1. Discussing [Markdown Linking Protocol](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) 1. Linking to sections internally within the module 1. Linking to external sources1. No parts of this template tutorial notebook rely on NRPy+-based components1. Converting Jupyter notebook to output LaTex PDFYou could also write your introduction to include subsections preceded by . introduction subsection:Include information relevant to this subsection here. Other (Optional): You may include any number of items here within the first box of the tutorial notebook, but I suggest being minimalistic when you can. Other sections that have been included in other tutorial moudles are as follows Note on Notation:When using a new type of notation for the first time within the NRPy+ tutorial, you may want to include some notes on that here. Citations:This is a great place to list out the references you link to within the module with actual citations. Table of Contents$$\label{toc}$$This notebook is organized as follows0. [Preliminaries](prelim): This is an optional section1. [Step 1](linking): The Markdown Linking Protocol 1. [Step 1.a](internal_links) Internal linking with the Jupyter notebook, Table of Contents 1. [Step 1.b](external_links): External linking outside of Jupyter notebook 1. [Step 1.b.i](nrpy_links): Linking to other files/modules within NRPy+ 1. [Step 2](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF fileThe Table of Contents (ToC) plays a significant role in the formatting of your module. 
The above ToC is for this module, but I have constructed it in a way such that you should see all of the important details for any module you need to write. If you choose to include a preliminaries section, enumerate it with the "0." All other sections, subsections, and sub-subsections can be enumerated with the 1. Jupyter/LaTex will handle there own numerbing/lettering scheme. It is important when creating subsections and sub-subsections that you indent seen in the Markdown code. The text colors vary for the level section you're assigning within the Markdown code. When writing within the brackets to specify a step number, the following scheme is to be used:* Header Sections: Step 1, Step 2, Step 3* Subsections: Step 1.a, Step 1.b, Step 1.c* Sub-subsections: Step 1.a.i, 1.a.ii, 1.a.iii If for some reason you go more then three levels deep in your sectioning, I would suggest finding a way to reorganize your sectioning to prevent that, or ask Zach Etienne what the next level of labelling for Steps should be. We will talk about the other components within the Markdown Code for the ToC in [Step 1.a](internal_links). The only text within the ToC section of this module should be the ToC code itself and what precedes it.I also suggest that the titles for the Steps you include here following the ":" match the titles you use throughout your module. Preliminaries: This is an optional section \[Back to [top](toc)\]$$\label{prelim}$$ This section is a great chance to include textual verbage that might have been too specific for the introduction section, but serves as a beneficial setup to the remainder of the module. For instance, you may want to define quantities here, express important equations, and so on. I suggest that the Preliminaries section is not followed by any Python code blocks, and remains simply a block of information for users to refer back to. Step 1: The Markdown Linking Protocol \[Back to [top](toc)\]$$\label{linking}$$We have already within this template had to link to sources both internally within this module, exteranlly to other components of the NRPy+ tutorial, as well as externally to additional web sources. The next few sections discuss how this is done. It is important to know that any linking is down by combining brackets and parentheses "\[ \]()" with the desired input in each. On another note, main sections like this have their titles preoceed by a single . As you will see, for every deeper layer of sectioning, an addiotnal is appended on, reducing the size of the text. Step 1.a: Internal linking with the Jupyter notebook, Table of Contents \[Back to [top](toc)\]$$\label{internal_links}$$A great resource for how to construct a Table of Contents is:https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3The Table of Contents is a family of internal links. To link internally we first have to specify an ***anchor tag*** which is the text within the parenthese of preceded by a (See ToC Markdown code). For instance, the anchor tag for this subsection is "internal_links". So, for a particular Step within the Table of Contents you specify the Step title in brackets (e.g., [Step 1.a]), appended by the anchor tag in parentheses preced by a (e.g., (internal_links)), followed by a ":" and the Step description (e.g., : Internal linking with the Jupyter notebook, Table of Contents). Look at the Markdown code for the Table of Contents for a few examples. **Important Note**: The anchor tags cannot be anything that you want. 
Anchor tags must be entirely lowercase and contain no spaces. Numbers are fine as well as underscores, but not capitalization. I suggest making the anchor tags have siginificant meaning to the section there tied to, instead of making one that reads "step1a". The reason I say this, is because if you ever need to resection your module, the tags won't all need to be chnaged as well if you give each one a unqiue name. All we have done so far is establish anchor tags and clickable links within the Table of Contents, but how do we establish the link to the specific section within the module. Opening up the Markdown code for this section you will see a line of code above the title, and a line of code directly below the title. These are the answers to the question. Each section requires these components to be included for both the Jupyter notebook and LaTex internal linking. Make sure the top line of the Markdown code has a space between it and the title. Similarly, the code directly beneath the title needs space below it as well, separated from the main body of text (see above in Markdown code).**Important Note**: Links do not work unless the two sections which are linked have been run.The Table of Contents is now linked to this section and you may have already noticed but this section, and all others, are linked back to the Table of Contents using the Markdown code in line at the end of the section title. This is exceedingly convenient for modules of great length. It may also be convenient when you're in a particular subsection and you wish to just return to the header section. This is accomplished using a bracket parentheses \[\]() pairing like so (see this in Markdown code). Go back to [Step 1](linking)Lastly, you would more often than not write a code block below implementing what was discussed in this section. This isn't always necessary, some header sections plainly serve as a set up for subsections that will contain all of the necessary coding components.
###Code
# This is the code block corresponding to Step 1.a: Internal linking within the Jupyter notebook, Table of Contents
print("We have successfully learned how to code internal links using Markdown Linking Protocol!!!")
###Output
We have successfully learned how to code internal links using Markdown Linking Protocol!!!
###Markdown
Step 1.b: External linking outside of this module \[Back to [top](toc)\]$$\label{external_links}$$To link outside of this particular module we still use bracket parentheses \[ \]() pairings. Since the links are not internal, we no longer need the symbol and anchor tags. Instead, you need an actual link. For instance, look at your Markdown code to see how we link this [website](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) to a line of text. Of course, web links will simply work on there own as a hyperlink, but often you may need to link to multiple external sources and do not want all of the individual addresses clogging up the body of your text.
###Code
# This is the code block for Step 1.b: External linking outside of Jupyter notebook
print("Be efficient in how you link external sources, utilize []() pairs!!!")
###Output
Be efficient in how you link external sources, utilize []() pairs!!!
###Markdown
Step 1.b.i: Linking to other files/modules within NRPy+ \[Back to [top](toc)\]$$\label{nrpy_links}$$Other useful extranal sources we would like to link to are the existing files/moudles within NRPy+. To do this we again resort to the \[ \]() pair. By simply typing the file name into the parentheses, you can connect to another [Tutorial module](Tutorial-Template_Style_Guide.ipynb) (see Markdown). To access a .py file, you want to type the command ../edit/followed by the file location. For instance, here is the [.py file](../edit/Template_Style_Guide.py) for this notebook (see Markdown).
###Code
# This is the code block for Step 1.b.i: Linking to other files/modules within NRPy+
print("Template_Style_Guide.py is an empty file...")
###Output
Template_Style_Guide.py is an empty file...
###Markdown
Step 2: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Template_Style_Guide.pdf](Tutorial-Template_Style_Guide.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)**Important Note**: Make sure that the file name is right in all six locations, two here in the Markdown, four in the code below. * Tutorial-Template_Style_Guide.pdf* Tutorial-Template_Style_Guide.ipynb* Tutorial-Template_Style_Guide.tex
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Template_Style_Guide")
###Output
Created Tutorial-Template_Style_Guide.tex, and compiled LaTeX file to PDF
file Tutorial-Template_Style_Guide.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); The NRPy+ Jupyter Tutorial Style Guide / Template Authors: Brandon Clark, Zach Etienne, & First Last Formatting improvements courtesy Brandon Clark**This is a warning message, in red text and bolded, to warn anyone using the module that it is, for example, actively in development, not yet validated, etc. Warning messages are optional.** This module implements a template designed by Brandon Clark to be used as a style guide for all tutorial notebooks within NRPy+. Items in Markdown code contained within "" are not included within the output (double click this box to see what I mean). To the run Markdown code simply hit "Shift + Enter" or the "Run" button above. **This text discusses how a module has been validated against other existing code or modules. This text is given a green font color and bolded. See how to bold and make text different colors in the Markdown code.** NRPy+ Source Code for this module:1. [Template_Style_Guide.py](../edit/Template_Style_Guide.py); [\[**tutorial**\]](Tutorial-Template_Style_Guide.ipynb) This is where you would describe what purpose this source code serves in this module. Read how to correctly link to these source code files/tutorial notebooks later in []. 1. Introduction:Here you write an introduction that discusses in slight detail the framework of this tutorial notebook. Here you may reference external works or websites on which pieces of your module rely. It is often helpful to include an enumerated algorithm to highlight this modules processes. Within the algorithm you may refer to where source code is implemented as a part of this module. The entire algorithm is outlined below, with NRPy+-based components highlighted in green.1. Constructing a Table of Contents1. 1. Discussing [Markdown Linking Protocol](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) 1. Linking to sections internally within the module 1. Linking to external sources1. No parts of this template tutorial notebook rely on NRPy+-based components1. Converting Jupyter notebook to output LaTex PDFYou could also write your introduction to include subsections preceded by . introduction subsection:Include information relevant to this subsection here. Other (Optional): You may include any number of items here within the first box of the tutorial notebook, but I suggest being minimalistic when you can. Other sections that have been included in other tutorial modules are as follows Note on Notation:When using a new type of notation for the first time within the NRPy+ tutorial, you may want to include some notes on that here. Citations:This is a great place to list out the references you link to within the module with actual citations. Table of Contents$$\label{toc}$$This notebook is organized as follows0. [Preliminaries](prelim): This is an optional section1. [Step 1](linking): The Markdown Linking Protocol 1. [Step 1.a](internal_links) Internal linking with the Jupyter notebook, Table of Contents 1. [Step 1.b](external_links): External linking outside of Jupyter notebook 1. [Step 1.b.i](nrpy_links): Linking to other files/modules within NRPy+ 1. [Step 2](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF fileThe Table of Contents (ToC) plays a significant role in the formatting of your module. 
The above ToC is for this module, but I have constructed it in a way such that you should see all of the important details for any module you need to write. If you choose to include a preliminaries section, enumerate it with the "0." All other sections, subsections, and sub-subsections can be enumerated with the 1. Jupyter/LaTex will handle there own numbering/lettering scheme. It is important when creating subsections and sub-subsections that you indent seen in the Markdown code. The text colors vary for the level section you're assigning within the Markdown code. When writing within the brackets to specify a step number, the following scheme is to be used:* Header Sections: Step 1, Step 2, Step 3* Subsections: Step 1.a, Step 1.b, Step 1.c* Sub-subsections: Step 1.a.i, 1.a.ii, 1.a.iii If for some reason you go more then three levels deep in your sectioning, I would suggest finding a way to reorganize your sectioning to prevent that, or ask Zach Etienne what the next level of labeling for Steps should be. We will talk about the other components within the Markdown Code for the ToC in [Step 1.a](internal_links). The only text within the ToC section of this module should be the ToC code itself and what precedes it.I also suggest that the titles for the Steps you include here following the ":" match the titles you use throughout your module. Preliminaries: This is an optional section \[Back to [top](toc)\]$$\label{prelim}$$ This section is a great chance to include textual verbiage that might have been too specific for the introduction section, but serves as a beneficial setup to the remainder of the module. For instance, you may want to define quantities here, express important equations, and so on. I suggest that the Preliminaries section is not followed by any Python code blocks, and remains simply a block of information for users to refer back to. Step 1: The Markdown Linking Protocol \[Back to [top](toc)\]$$\label{linking}$$We have already within this template had to link to sources both internally within this module, externally to other components of the NRPy+ tutorial, as well as externally to additional web sources. The next few sections discuss how this is done. It is important to know that any linking is down by combining brackets and parentheses "\[ \]()" with the desired input in each. On another note, main sections like this have their titles preceded by a single . As you will see, for every deeper layer of sectioning, an additional is appended on, reducing the size of the text. Step 1.a: Internal linking with the Jupyter notebook, Table of Contents \[Back to [top](toc)\]$$\label{internal_links}$$A great resource for how to construct a Table of Contents is:https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3The Table of Contents is a family of internal links. To link internally we first have to specify an ***anchor tag*** which is the text within the parentheses of preceded by a (See ToC Markdown code). For instance, the anchor tag for this subsection is "internal_links". So, for a particular Step within the Table of Contents you specify the Step title in brackets (e.g., [Step 1.a]), appended by the anchor tag in parentheses preceded by a (e.g., (internal_links)), followed by a ":" and the Step description (e.g., : Internal linking with the Jupyter notebook, Table of Contents). Look at the Markdown code for the Table of Contents for a few examples. **Important Note**: The anchor tags cannot be anything that you want. 
Anchor tags must be entirely lowercase and contain no spaces. Numbers are fine as well as underscores, but not capitalization. I suggest making the anchor tags have significant meaning to the section there tied to, instead of making one that reads "step1a". The reason I say this, is because if you ever need to resection your module, the tags won't all need to be changed as well if you give each one a unique name. All we have done so far is establish anchor tags and clickable links within the Table of Contents, but how do we establish the link to the specific section within the module. Opening up the Markdown code for this section you will see a line of code above the title, and a line of code directly below the title. These are the answers to the question. Each section requires these components to be included for both the Jupyter notebook and LaTex internal linking. Make sure the top line of the Markdown code has a space between it and the title. Similarly, the code directly beneath the title needs space below it as well, separated from the main body of text (see above in Markdown code).**Important Note**: Links do not work unless the two sections which are linked have been run.The Table of Contents is now linked to this section and you may have already noticed but this section, and all others, are linked back to the Table of Contents using the Markdown code in line at the end of the section title. This is exceedingly convenient for modules of great length. It may also be convenient when you're in a particular subsection and you wish to just return to the header section. This is accomplished using a bracket parentheses \[\]() pairing like so (see this in Markdown code). Go back to [Step 1](linking)Lastly, you would more often than not write a code block below implementing what was discussed in this section. This isn't always necessary, some header sections plainly serve as a set up for subsections that will contain all of the necessary coding components.
###Code
# This is the code block corresponding to Step 1.a: Internal linking within the Jupyter notebook, Table of Contents
print("We have successfully learned how to code internal links using Markdown Linking Protocol!!!")
###Output
We have successfully learned how to code internal links using Markdown Linking Protocol!!!
###Markdown
Step 1.b: External linking outside of this module \[Back to [top](toc)\]$$\label{external_links}$$To link outside of this particular module we still use bracket parentheses \[ \]() pairings. Since the links are not internal, we no longer need the symbol and anchor tags. Instead, you need an actual link. For instance, look at your Markdown code to see how we link this [website](https://medium.com/@sambozek/ipython-er-jupyter-table-of-contents-69bb72cf39d3) to a line of text. Of course, web links will simply work on there own as a hyperlink, but often you may need to link to multiple external sources and do not want all of the individual addresses clogging up the body of your text.
###Code
# This is the code block for Step 1.b: External linking outside of Jupyter notebook
print("Be efficient in how you link external sources, utilize []() pairs!!!")
###Output
Be efficient in how you link external sources, utilize []() pairs!!!
###Markdown
Step 1.b.i: Linking to other files/modules within NRPy+ \[Back to [top](toc)\]$$\label{nrpy_links}$$Other useful external sources we would like to link to are the existing files/modules within NRPy+. To do this we again resort to the \[ \]() pair. By simply typing the file name into the parentheses, you can connect to another [Tutorial module](Tutorial-Template_Style_Guide.ipynb) (see Markdown). To access a .py file, you want to type the command ../edit/followed by the file location. For instance, here is the [.py file](../edit/Template_Style_Guide.py) for this notebook (see Markdown).
###Code
# This is the code block for Step 1.b.i: Linking to other files/modules within NRPy+
print("Template_Style_Guide.py is an empty file...")
###Output
Template_Style_Guide.py is an empty file...
###Markdown
Step 2: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-Template_Style_Guide.pdf](Tutorial-Template_Style_Guide.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)**Important Note**: Make sure that the file name is right in all six locations, two here in the Markdown, four in the code below. * Tutorial-Template_Style_Guide.pdf* Tutorial-Template_Style_Guide.ipynb* Tutorial-Template_Style_Guide.tex
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Template_Style_Guide")
###Output
Created Tutorial-Template_Style_Guide.tex, and compiled LaTeX file to PDF
file Tutorial-Template_Style_Guide.pdf
|
1. Mapping ribosome profiling and RNA-seq data.ipynb | ###Markdown
1. Mapping ribosome profiling and RNA-seq data
These steps collate information from transcriptome assemblies, RNA-seq and ribosome profiling raw data to produce ".trpedf" files (details described below), which will be used in all subsequent data preprocessing steps.
General Software Requirements
A \*nix-based system: Bowtie2, Tophat, Cufflinks, Bedtools
Python 2.7 (with Numpy, Scipy, Pandas, ViennaRNA, Statsmodels, Biopython; everything but ViennaRNA can also be run on Windows)
Raw Data Source
Ribosome profiling and RNA-seq data were obtained from NCBI's Sequence Read Archive (SRA) by downloading the corresponding .SRA files. Genome and transcriptome annotations were obtained from Illumina iGenomes.

| Sample | Ribosome Profiling Data | RNA-Seq Data | Genome Assembly | Transcriptome |
| --- | --- | --- | --- | --- |
| Human HeLa cells | SRR970587, SRR970588 | SRR970592, SRR970593 | GRCh37 | Ensembl 70 |
| Mouse ES cells | SRR315616, SRR315617, SRR315618, SRR315619 | SRR315595, SRR315596 | GRCm38 | Ensembl 70 |
| Zebrafish Shield stage | SRR836196 | SRR2047225 | Zv9 | Ensembl 70 |

SRA files were converted to fastq files and clipped of the 3' ligation adapter sequence "CTGTAGGCACCATCAAT", retaining reads >= 25 nucleotides.
Mapping Ribosome Profiling Data
Ribosome profiling reads were first depleted of abundant sequences such as rRNA using Bowtie2. Abundant sequences were compiled from the AbundantSequences directory of the Illumina iGenomes compilation and built into a Bowtie2 index. Additional manually curated rRNA sequences for zebrafish were used, and are included in the supplementary data files. Example using Mouse ES cell ribosome profiling data below:
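Before the depletion step, the raw reads have to be converted from SRA and clipped; that step is not shown as a cell in this notebook, so the sketch below illustrates one way to do it with sra-tools and cutadapt (the tool choice and file names here are assumptions for illustration, not taken from the original pipeline).
###Code
%%bash
# Sketch only (assumed tools, not part of the published pipeline):
# dump one SRA run to fastq, clip the 3' ligation adapter, and keep reads >= 25 nt after clipping.
fastq-dump SRR315616
cutadapt -a CTGTAGGCACCATCAAT --minimum-length 25 -o ribo_mES.fastq SRR315616.fastq
###Output
_____no_output_____
###Markdown
The clipped reads (ribo_mES.fastq) are then depleted of abundant sequences as follows.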
###Code
%%bash
cat *.fa > ./annotations/GRCm38_Abundant.fa
bowtie2-build ./annotations/GRCm38_Abundant.fa GRCm38_Abundant
OPTIONS="-N 0 -L 23 --norc"
bowtie2 $OPTIONS --un ribo_mES_sub_abund.fastq -x GRCm38_Abundant -U ribo_mES.fastq -S /dev/null
###Output
_____no_output_____
###Markdown
Remaining reads were mapped to the Ensembl 70 transcriptome using Tophat, allowing no indels, junctions only from gene annotations, max 10 multihits, with multihit pre-filtering. Use the .gtf files and Bowtie2 indices ("genome") from the corresponding iGenomes compilation.
###Code
%%bash
OPTIONS="--max-insertion-length 0 --max-deletion-length 0\
--no-novel-juncs -g 10 --prefilter-multihits\
--library-type fr-secondstrand"
tophat -o ribo_mES $OPTIONS -G genes.gtf genome ribo_mES_sub_abund.fastq
###Output
_____no_output_____
###Markdown
Mapping RNA-Seq Data
RNA-seq reads (when libraries were constructed by 3' ligation) were mapped by Tophat using the following parameters:
###Code
%%bash
OPTIONS="--no-novel-juncs --library-type=fr-secondstrand"
tophat -o mRNA_mES $OPTIONS -G genes.gtf genome mRNA_mES.fastq
###Output
_____no_output_____
###Markdown
Quantification of RNA-seq data was done using Cufflinks; accepted_hits.bam is from the Tophat output:
###Code
%%bash
OPTIONS="-b genome.fa --multi-read-correct --library-type=fr-secondstrand"
cufflinks ${OPTIONS} -o cuffdiff/ -G genes.gtf accepted_hits.bam
###Output
_____no_output_____
###Markdown
Assembling Canonical Transcriptome
To generate a list of transcripts that only use one "canonical" transcript isoform per gene, ensGtp tables for each vertebrate species were retrieved from the UCSC genome browser. BED files were generated from the refFlat files in the iGenomes compilation, using the following awk script:
###Code
# ASSEMBLY="GRCm38_ens"
# Save following script as file [refFlat_to_bed12.awk]
# run as 'refFlat_to_bed12.awk refFlat.txt > ./annotations/${ASSEMBLY}_genes.bed'
#!/bin/awk -f
BEGIN {FS="\t"; OFS="\t"}
{ blockSizes="";
blockStarts="";
split($10,exonStarts,",");
split($11,exonEnds,",");
for (i=1; i<=$9; i++)
{ blockSizes=blockSizes exonEnds[i]-exonStarts[i] ",";
blockStarts=blockStarts exonStarts[i]-$5 ",";
}
blockSizes = substr(blockSizes,1,length(blockSizes)-1);
blockStarts = substr(blockStarts,1,length(blockStarts)-1);
print $3,$5,$6,$2,0,$4,$7,$8,"0,0,0",$9,blockSizes,blockStarts;
}
###Output
_____no_output_____
###Markdown
The canonical transcriptome contains one transcript per gene: the transcript with the longest CDS, then the longest 5' UTR, then the longest transcript length. With the BED file and ensGtp file in the same directory, the following python script was run to generate "\$ASSEMBLY_genes_canonical.bed", which is the transcript subset of \$ASSEMBLY_genes.bed with one transcript per gene. Upload the file to UCSC as a custom track and use it to obtain the corresponding fasta file. Alternatively, the transcriptome fasta file can be obtained from a local whole-genome fasta file using bedtools getfasta (a.k.a. getFastaFromBed).
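For the local route, the Bedtools command is sketched below (file names follow the conventions used elsewhere in this notebook and are assumptions to that extent; genome.fa is the whole-genome fasta from the iGenomes compilation).
###Code
%%bash
# Sketch: extract spliced (-split), strand-aware (-s) transcript sequences for the canonical BED12 records,
# writing the transcript ID as the fasta header (-name).
ASSEMBLY="GRCm38_ens"
bedtools getfasta -fi genome.fa -bed ./annotations/${ASSEMBLY}_genes_canonical.bed \
    -split -s -name -fo ./annotations/${ASSEMBLY}_genes_canonical.fasta
###Output
_____no_output_____
###Markdown
The python script below is the one that produces that canonical BED file from the full gene annotation.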
###Code
ANNOTATIONS_DIR = "./annotations/"
DATA_DIR = "./data/"
ASSEMBLY = "GRCm38_ens"
def transcript_position(exons, c_intron_lengths, genomic_pos):
for exon, c_intron_length in zip(exons, c_intron_lengths):
if genomic_pos >= exon[0] and genomic_pos <= exon[1]:
return genomic_pos - exons[0][0] - c_intron_length
def read_Gtp_file(Gtp_file):
transcript_to_gene = {}
gene_to_transcript = {}
with open(Gtp_file, "r+") as f:
for line in f:
entry = line.strip().split("\t")
transcript_to_gene[entry[1]] = entry[0]
gene_to_transcript.setdefault(entry[0], []).append(entry[1])
return transcript_to_gene, gene_to_transcript
#%% OPEN FILES
with open(ANNOTATIONS_DIR + ASSEMBLY + "_genes.bed", "r+") as in_bed, \
open(ANNOTATIONS_DIR + ASSEMBLY + "_genes_canonical.bed", "w+") as out_bed:
#%% READ GTP FILE
transcript_to_gene, gene_to_transcript = read_Gtp_file(ANNOTATIONS_DIR + ASSEMBLY + "Gtp")
#%% READ BED FILES
bed_store = {}
for line in in_bed:
_, chromStart, chromEnd, name, \
_, strand, thickStart, thickEnd, \
_, blockCount, blockSizes, blockStarts = line.split("\t")
if name not in transcript_to_gene:
continue
#%% CONVERT ENTRY TO INTEGERS
chromStart, chromEnd, thickStart, thickEnd, blockCount = map(int,
(chromStart, chromEnd,
thickStart, thickEnd, blockCount))
blockSizes = map(int, blockSizes.split(","))
blockStarts = map(int, blockStarts.split(","))
#%% SECONDARY DATA FOR CALCULATIONS
intron_lengths = [(blockStarts[i+1]-blockStarts[i]-blockSizes[i]) for i in xrange(blockCount-1)]
c_intron_lengths = [sum(intron_lengths[:i]) for i in xrange(blockCount)]
exons = [[i[0] + chromStart, sum(i) + chromStart] for i in zip(blockStarts, blockSizes)]
plus_strand = (strand == "+")
#%% CALCULATE LENGTHS: TRANSCRIPT, 5'LEADER, CDS, 3'UTR
transcript_length = sum(blockSizes)
if plus_strand:
UTR5_length = abs(transcript_position(exons, c_intron_lengths, thickStart)\
- transcript_position(exons, c_intron_lengths, chromStart))
UTR3_length = abs(transcript_position(exons, c_intron_lengths, chromEnd)\
- transcript_position(exons, c_intron_lengths, thickEnd))
else:
UTR5_length = abs(transcript_position(exons, c_intron_lengths, chromEnd)\
- transcript_position(exons, c_intron_lengths, thickEnd))
UTR3_length = abs(transcript_position(exons, c_intron_lengths, thickStart)\
- transcript_position(exons, c_intron_lengths, chromStart))
CDS_length = abs(transcript_position(exons, c_intron_lengths, thickEnd)\
- transcript_position(exons, c_intron_lengths, thickStart))
bed_store[name] = [line, CDS_length, UTR5_length, transcript_length]
#%% FIND TRANSCRIPT WITH LONGEST CDS, THEN LONGEST 5' LEADER, THEN LONGEST TRANSCRIPT LENGTH, PER GENE, OUTPUT
for gene in gene_to_transcript:
try:
canonical_transcript = sorted([[transcript, bed_store[transcript][1],
bed_store[transcript][2],
bed_store[transcript][3]] \
for transcript in gene_to_transcript[gene] \
if transcript in bed_store], key=lambda i: (i[1], i[2], i[3]))[-1][0]
except IndexError:
continue
out_bed.write(bed_store[canonical_transcript][0])
###Output
_____no_output_____
###Markdown
Integrating ribosome profiling and RNA-seq data in transcript coordinates
For data analysis, a custom file format is used that integrates RNA-seq and ribosome profiling data in the context of a defined transcriptome. Ribosome profiling data first needs to be assembled at nucleotide resolution. Note that the offsets correspond to the P-site, rather than the A-site. Use either of the following awk scripts as part of the conversion of the .bam files (accepted_hits.bam from Tophat).
###Code
# Create file as bed12_to_bedpoint_mammal.awk, for use with human and mouse ribosome profiling data
#!/bin/awk -f
BEGIN {OFS="\t"}
{if ($10 != 1){
split($11,a,",");\
split($12,b,",");\
len=0;\
for (i in a){len+= a[i]}
}
else
{len=$11}
out=(len>=29 && len<=35);\
strand=$6;\
if (out){
if(strand=="+"){
if(len == 29) offset = 12;\
else if(len == 30) offset = 12;\
else if(len == 31) offset = 13;\
else if(len == 32) offset = 13;\
else if(len == 33) offset = 13;\
else if(len == 34) offset = 14;\
else if(len == 35) offset = 14;\
}
else{ if(len == 29) offset = 16;\
else if(len == 30) offset = 17;\
else if(len == 31) offset = 17;\
else if(len == 32) offset = 18;\
else if(len == 33) offset = 19;\
else if(len == 34) offset = 19;\
else if(len == 35) offset = 20;\
}
}
if(out && ($10 == 1)){print $1, $2+offset, $2+offset+1, $4, $5, $6}
else if(out){
for (i in a){
if (offset <= a[i] && offset > 0){print $1, $2+offset+b[i], $2+offset+b[i]+1, $4, $5, $6}
offset -= a[i];\
}
}
}
# Create file as bed12_to_bedpoint_zf.awk, for use with zebrafish ribosome profiling data
#!/bin/awk -f
BEGIN {OFS="\t"}
{if ($10 != 1){
split($11,a,",");\
split($12,b,",");\
len=0;\
for (i in a){len+= a[i]}
}
else
{len=$11}
out=(len>=27 && len<=32);\
strand=$6;\
if (out){
if(strand=="+"){
if(len == 27) offset = 11;\
else if(len == 28) offset = 11;\
else if(len == 29) offset = 12;\
else if(len == 30) offset = 12;\
else if(len == 31) offset = 12;\
else if(len == 32) offset = 13;\
}
else{ if(len == 27) offset = 15;\
else if(len == 28) offset = 16;\
else if(len == 29) offset = 16;\
else if(len == 30) offset = 17;\
else if(len == 31) offset = 18;\
else if(len == 32) offset = 18;\
}
}
if(out && ($10 == 1)){print $1, $2+offset, $2+offset+1, $4, $5, $6}
else if(out){
for (i in a){
if (offset <= a[i] && offset > 0){print $1, $2+offset+b[i], $2+offset+b[i]+1, $4, $5, $6}
offset -= a[i];\
}
}
}
###Output
_____no_output_____
###Markdown
Running the following commands (from BedTools) generates the ".in" files that will be used for creating the ".trpedf" files in subsequent analyses, as well as strand-specific bedgraph files (which can be converted to binary .bw files using bedGraphToBigWig from UCSC, for easy viewing in most genome browsers). ".bg.bed" files may be deleted following execution of these commands. The respective "ChromInfo.txt" files can be found in the iGenomes compilations.
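For reference, the bigWig conversion mentioned above can be done with the UCSC bedGraphToBigWig utility roughly as sketched below (the utility must be available on the PATH; the input files are the bedgraphs produced by the commands that follow, so this is an illustration rather than part of the original pipeline).
###Code
%%bash
# Sketch: convert the strand-specific bedgraphs into binary bigWig tracks.
# bedGraphToBigWig expects a sorted, non-overlapping bedgraph plus a chromosome-sizes file (ChromInfo.txt).
for strand in fwd rev; do
    sort -k1,1 -k2,2n mES_${strand}.bedgraph > mES_${strand}.sorted.bedgraph
    bedGraphToBigWig mES_${strand}.sorted.bedgraph ChromInfo.txt mES_${strand}.bw
done
###Output
_____no_output_____
###Markdown
The commands below generate those bedgraph files together with the ".in" file for the canonical transcriptome.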
###Code
%%bash
ASSEMBLY="GRCm38_ens"
bamToBed -bed12 -i accepted_hits.bam | bed12_to_bedpoint_mammal.awk |\
tee >(genomeCoverageBed -bg -i stdin -g ChromInfo.txt -strand + > mES_fwd.bedgraph) \
>(genomeCoverageBed -bg -i stdin -g ChromInfo.txt -strand - > mES_rev.bedgraph) \
>/dev/null
awk 'BEGIN{FS="\t";OFS="\t"}{print $1,$2,$3,".",$4,"+"}' mES_fwd.bedgraph > mES_fwd.bg.bed
awk 'BEGIN{FS="\t";OFS="\t"}{print $1,$2,$3,".",-$4,"-"}' mES_rev.bedgraph > mES_rev.bg.bed
cat mES_fwd.bg.bed mES_rev.bg.bed | sort -k1,1 -k2,2n > mES.bg.bed
intersectBed -wa -wb -s -split -a ${ASSEMBLY}_genes_canonical.bed -b mES.bg.bed | \
awk 'BEGIN{FS="\t"; OFS="\t"}\
{if ($4==curr) print $14,$15,$17;\
else {print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12; print $14,$15,$17; curr=$4}}'\
> mES_canonical.in
###Output
_____no_output_____
###Markdown
The following python script integrates mRNA expression data from cufflinks (genes.fpkm_tracking files from the cufflinks output), ribosome profiling data from ".in" files, and sequence data (from genes_canonical.fasta, derived from genes_canonical.bed; used to define all ORFs). The genes_canonical.fasta file can be generated from genes_canonical.bed either by uploading the .bed file to UCSC and downloading the fasta, or by using Bedtools getfasta. Data was organized in a tab-separated custom ASCII file format (.trpedf) for subsequent processing. (N.B. trpedf ~ **t**ranscript **r**ibosome **p**rofile **e**xtended, **D**ata**F**rame compatible)

| Column | Description |
| --- | --- |
| Transcript | Transcript ID |
| Gene | Gene ID |
| Gene_Name | Gene Name |
| Gene_Expression_FPKM | Expression at gene level (from corresponding RNA-seq data; Tophat + Cufflinks) |
| ORF_starts | ORF starts (comma-separated values in transcript coordinates, 0-based) |
| ORF_ends | ORF ends (as above) |
| RPF_csvProfile | Ribosome profiling reads at nucleotide resolution in transcript coordinates, for the length of the transcript, comma-separated values |
| CDS | Annotated CDS |

".in" files may be deleted following execution of these commands.
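Because the format is DataFrame-compatible, a finished .trpedf file can later be loaded roughly as sketched below (this loading snippet is only an illustration and is not part of the processing script that follows).
###Code
# Sketch: read a .trpedf file and expand the comma-separated columns back into lists of integers.
import pandas as pd
trpedf = pd.read_csv(DATA_DIR + "mES_canonical.trpedf", sep="\t")
for col in ("ORF_starts", "ORF_ends", "RPF_csvProfile", "CDS"):
    trpedf[col] = trpedf[col].map(lambda s: [int(x) for x in str(s).split(",")])
###Output
_____no_output_____
###Markdown
The script below builds the .trpedf files from the ".in" data, the cufflinks FPKM tables and the canonical transcript sequences.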
###Code
ASSEMBLY = "GRCm38_ens"
stage = "mES"
stages = [stage]  # list of stages expected by read_genes_tracking_file below (a single stage here)
from Bio import SeqIO
from ast import literal_eval
import hmm_for_RPF_Seq as h
def read_genes_tracking_file(tracking_file, stages):
expected_line_length = 9 + 4 * len(stages)
ensg_expression = {stage:{} for stage in stages}
ensg_name = {}
with open(tracking_file, "r+") as f:
for line in f:
entry = line.strip().split("\t")
if entry[0] == "tracking_id" or len(entry) != expected_line_length: continue
for i, stage in enumerate(stages):
expression = 4 * i + 9
status = 4 * i + 12
if entry[status] != "OK": continue
ensg_expression[stage][entry[0]] = float(entry[(expression)])
ensg_name[entry[0]] = entry[4]
return ensg_expression, ensg_name
def read_Gtp_file(Gtp_file):
transcript_to_gene = {}
gene_to_transcript = {}
with open(Gtp_file, "r+") as f:
for line in f:
entry = line.strip().split("\t")
transcript_to_gene[entry[1]] = entry[0]
gene_to_transcript.setdefault(entry[0], []).append(entry[1])
return transcript_to_gene, gene_to_transcript
def csv(list):
return ",".join(map(str, list))
def tsv_line(*list):
return "\t".join(map(str, list)) + "\n"
def ORF_start_end(seq):
ORF_list = []
seq_len = len(seq)
for frame in xrange(3):
trans = str(seq[frame:].translate(1))
trans_len = len(trans)
aa_start, aa_end = [0 for i in xrange(2)]
while aa_start < trans_len:
aa_start = trans.find("M", aa_start)
if aa_start == -1:
break
aa_end = trans.find("*", aa_start)
ORF_start = frame + aa_start * 3
ORF_end = frame + aa_end * 3 + 3
if aa_end == -1:
ORF_end = seq_len
ORF_list.append((ORF_start, ORF_end))
aa_start = aa_start + 1
return zip(*tuple(sorted(ORF_list)))
def transcript_position(exons, c_intron_lengths, genomic_pos):
for exon, c_intron_length in zip(exons, c_intron_lengths):
if genomic_pos >= exon[0] and genomic_pos <= exon[1]:
return genomic_pos - exons[0][0] - c_intron_length
return
def parse_in_file(f, prev_entry_pos):
f.seek(prev_entry_pos)
while 1:
line = f.readline()
entry = line.split()
if len(entry) != 3:
try:
if thick_start == thick_end: transcript_CDS = [0,0]
else:
# assume strand is "+" first
transcript_CDS = [transcript_position(exons, c_intron_lengths, thick_start),
transcript_position(exons, c_intron_lengths, thick_end)]
if strand == "-":
transcript_bedgraph.reverse()
transcript_CDS[0], transcript_CDS[1] = (transcript_length - transcript_CDS[1],
transcript_length - transcript_CDS[0])
return transcript_ID, transcript_CDS, transcript_bedgraph, prev_entry_pos
except UnboundLocalError:
pass
transcript_ID, strand = (entry[3], entry[5])
transcript_start, thick_start, thick_end, block_count = map(int, (entry[1], entry[6], entry[7], entry[9]))
block_sizes = literal_eval(entry[10])
genome_block_starts = literal_eval(entry[11])
transcript_length = sum(block_sizes)
transcript_bedgraph = [0] * transcript_length
#introns and exons below in *GENOMIC* coordinates (i.e. not strand-specific)
intron_lengths = [(genome_block_starts[i+1]-genome_block_starts[i]-block_sizes[i]) for i in xrange(block_count-1)]
c_intron_lengths = [sum(intron_lengths[:i]) for i in xrange(block_count)]
exons = [[i[0], sum(i)] for i in zip(genome_block_starts, block_sizes)]
prev_entry = entry
else:
prev_entry_pos = f.tell()
transcript_pos = transcript_position(exons, c_intron_lengths, int(entry[1]))
if transcript_pos != None:
for i in xrange(int(entry[1])-int(entry[0])):
if transcript_pos + i < transcript_length:
transcript_bedgraph[transcript_pos + i] = abs(int(entry[2]))
#%% Files, Stages
ensg_expression, ensg_name = read_genes_tracking_file(DATA_DIR + stage + "_genes.fpkm_tracking", stages)
seqs = SeqIO.index(ANNOTATIONS_DIR + ASSEMBLY + "_genes_canonical.fasta", "fasta")
enst_to_ensg, ensg_to_enst = read_Gtp_file(ANNOTATIONS_DIR + ASSEMBLY + "Gtp")
in_file = stage + ".in"
trpedf_file = DATA_DIR + stage + "_canonical.trpedf"
#%% DEFINE ORFs in seqs
ORF_starts_ends = {}
for seq in seqs:
ORF_starts_ends[seq] = ORF_start_end(seqs[seq].seq)
with open(in_file, 'rb+') as f, open(trpedf_file, 'w+') as out:
out.write(tsv_line("Transcript", "Gene", "Gene_Name", "Gene_Expression_FPKM",
"ORF_starts", "ORF_ends", "RPF_csvProfile", "CDS"))
prev_entry_pos = 0
while 1:
try:
ID,transcript_CDS, transcript_bedgraph, prev_entry_pos = parse_in_file(f, prev_entry_pos)
except IndexError:
break
try:
expression = ensg_expression[stage][enst_to_ensg[ID]]
except KeyError:
continue
try:
name = ensg_name[enst_to_ensg[ID]]
except KeyError:
name = enst_to_ensg[ID]
if len(ORF_starts_ends[ID]) == 0: continue
out.write(tsv_line(ID, enst_to_ensg[ID], name, expression,
csv(ORF_starts_ends[ID][0]),
csv(ORF_starts_ends[ID][1]),
csv(transcript_bedgraph),
csv(transcript_CDS)))
###Output
_____no_output_____ |
nepremicnine.ipynb | ###Markdown
Real estate market analysis
In this project we analyse the real estate market based on listings collected from the largest Slovenian real estate portal [nepremicnine.net](nepremicnine.net). We are mainly interested in how various factors (location, age, ...) affect the price of a property. We will answer the following questions and comment on the hypotheses we set before starting the work:
* how does location affect the price of a property?
* how does size affect the price per square metre?
* how does the age of a property affect its price?
* prices of properties per square metre are considerably higher in Ljubljana than elsewhere,
and a few more that came up during the research. From the listings we collected the following data:
* listing id
* region
* listing title
* kind of property (apartment, house, land, ...)
* type of property (a more detailed breakdown of the kind - studio apartment, agricultural land, ...)
* plot size
* property size
* price
* agency
First we imported the data and all the tools needed for the work.
###Code
import pandas as pd
import os.path
import matplotlib.pyplot as plt
%matplotlib inline
nepr_file = os.path.join('podatki/obdelani_podatki', 'nepremicnine_1.csv')
nepr_z_dvojniki = pd.read_csv(nepr_file)
###Output
_____no_output_____
###Markdown
Let us look at how Slovenia is divided in the following analysis, since this is essential for understanding it. The data are grouped by the (currently very topical) statistical regions; only the Osrednjeslovenska (Central Slovenia) statistical region is split into *ljubljana mesto* (marked in red) and *ljubljana okolica*. As we will see, this division is very sensible, because the two real estate markets differ considerably.
Data overview
Before starting the work, we noticed that some listings are repeated in several regions. We saw no pattern in how they repeat, so we simply left them out (this did not spoil the data, since there are few such listings).
###Code
pd.concat(g for _, g in nepr_z_dvojniki.groupby("id") if len(g) > 1)
###Output
_____no_output_____
###Markdown
Below we remove the duplicates and add columns that will be useful in the following analysis: *cena_m2*, which represents the price per square metre, *desetletje* (decade of construction) for comparing the age of properties, and the size rounded down to the nearest 100 m2.
###Code
nepr_brez_dvojnikov = nepr_z_dvojniki.drop_duplicates('id')
nepr = nepr_brez_dvojnikov.set_index('id')
nepr['cena_m2'] = nepr['cena'] / nepr['velikost']
nepr['desetletje'] = (nepr['leto'] // 10) * 10
nepr['zaokrozena_velikost'] = (nepr['velikost'] // 100) * 100
###Output
_____no_output_____
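###Markdown
A quick look at the derived columns added above (an illustrative check only).
###Code
# Peek at the price per m2, decade of construction and rounded size.
nepr[['cena', 'velikost', 'cena_m2', 'desetletje', 'zaokrozena_velikost']].head()
###Output
_____no_output_____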
###Markdown
Below are all the kinds of property that were on offer in October. We can see that flats and houses dominate.
###Code
vrste = nepr.groupby('vrsta_nepremicnine')
vrste.size()
###Output
_____no_output_____
###Markdown
As we can see, a large share of the listings are *Posest* (land) - mostly agricultural and building plots, typically large parcels with a relatively low price. As such they are hard to compare with the rest, and we will return to them later. Let us prepare a table of properties that excludes them.
###Code
brez_posesti = nepr[nepr.vrsta_nepremicnine != "Posest"]
###Output
_____no_output_____
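###Markdown
As a quick check of how much of the data the land listings represent, the sketch below compares the number of rows before and after excluding them.
###Code
# Number of listings overall vs. without the "Posest" (land) category.
len(nepr), len(brez_posesti)
###Output
_____no_output_____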
###Markdown
Price per square metre by region Shown below is a bar chart of the average price per square metre for each statistical region. As expected, the average price in Ljubljana is considerably higher than elsewhere. It is followed by southern Primorska with its sought-after flats by the sea, while the remaining regions fall within a relatively narrow range.
###Code
po_regijah = brez_posesti.groupby('regija').mean('cena_m2').sort_values('cena_m2', ascending = False)[['cena_m2']]
gr1 = po_regijah.cena_m2.plot.bar()
gr1.set_title("Povprečna cena po regijah")
gr1.set_xlabel('Regija')
gr1.set_ylabel('Cena na m2')
###Output
_____no_output_____
###Markdown
How the age of a property affects its priceWe are interested in how the age of a property - that is, the year it was built - affects its price. We can see that a property in Ljubljana is more expensive than elsewhere regardless of its age. We expected buildings from the first half of the 20th century to be cheaper because of poorer earthquake-resistant construction - as is well known, building regulations in Yugoslavia were tightened considerably after the 1963 Skopje earthquake - but no clear trend in that direction can be observed.In almost all regions, however, the newest buildings are more expensive (again, the difference is most pronounced in Ljubljana).The chart below shows how the price varies with the year of construction in each region.
###Code
grupiran = brez_posesti[(brez_posesti.desetletje >= 1900) & (brez_posesti.cena_m2 <= 80000)][['regija','cena_m2','desetletje']]
gr2 = grupiran.groupby(['regija','desetletje'])["cena_m2"].mean().unstack(level = 0).plot()
gr2.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
gr2.set_ylabel('Cena na m2')
gr2.set_xlabel('Desetletje izgradnje')
gr2.set_title('Cena po regijah glede na desetletje izgradnje')
###Output
_____no_output_____
###Markdown
Prices by kind of propertyLet us look at how the kind of property affects the price. Here we take averages over all regions and all ages. Flats are on average the most expensive, while the lower average price of houses is surprising. This can be explained by the fact that many of the houses for sale are in regions outside Ljubljana, where prices are noticeably lower, whereas more of the flats for sale are in Ljubljana. This is quite clearly visible in the chart below.
###Code
st_po_regijah = brez_posesti.groupby(['regija','vrsta_nepremicnine'])
st_po_regijah_graf = st_po_regijah.size().unstack(level = 1).plot.bar()
st_po_regijah_graf.set_ylabel('Število')
st_po_regijah_graf.set_xlabel('Regija')
st_po_regijah_graf.legend()
st_po_regijah_graf.set_title('Število vrst nepremičnin po regijah')
gr3 = brez_posesti.groupby('vrsta_nepremicnine').mean('cena_m2').sort_values('cena_m2',ascending = False).cena_m2.plot.bar()
gr3.set_ylabel('Cena na m2')
gr3.set_xlabel('Vrsta nepremičnine')
gr3.set_title('Cena po vrsti nepremičnine')
###Output
_____no_output_____
###Markdown
Below, each of the categories above is broken down further. The price of the smallest housing unit - the studio flat - is again surprising, which can be explained by their concentration in the centre of Ljubljana. Offices also rank high on the list.
###Code
gr4 = brez_posesti.groupby('tip_nepremicnine').mean('cena_m2').sort_values('cena_m2',ascending = False).cena_m2.plot.bar()
gr4.set_ylabel('Cena na m2')
gr4.set_xlabel('Tip nepremičnine')
gr4.set_title('Cena po tipu nepremičnine')
###Output
_____no_output_____
###Markdown
How size affects the priceLet us take a closer look at how size (rounded to 100 square metres) affects the price per square metre. As we saw above, studio flats are on average the most expensive, which is clearly reflected in the chart below as well. For larger properties there is no clear trend - probably also because of the relatively small sample.
###Code
gr5 = brez_posesti[brez_posesti.zaokrozena_velikost < 10000].groupby('zaokrozena_velikost').mean('cena_m2').cena_m2.plot()
gr5.set_ylabel('Cena na m2')
gr5.set_xlabel("Navzdol na 100m2 zaokrožena velikost")
gr5.set_title('Vpliv velikosti nepremičnine na njeno ceno')
###Output
_____no_output_____
###Markdown
Undeveloped landTo finish, let us review the land listings we left out earlier. We first notice that the prices are often recorded incorrectly - consider an example of such a record: for reasons unknown to me, land prices are given directly as a price per square metre (unlike the other properties, where this price still had to be computed). A problem with the decimal notation appears a few times, so the computer reads the example above as 208500€ per square metre, which would make the price of the whole plot almost 500 000 000€ - an obvious error. Fortunately there are not many such cases, so the sample is not spoiled.We therefore cap the prices at 1000€ per square metre, which lets us avoid such outliers.
###Code
posesti = nepr[(nepr.vrsta_nepremicnine == "Posest") & (nepr.cena_m2 < 1000)]
posesti
###Output
_____no_output_____
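###Markdown
To support the claim that the mis-parsed prices are rare, the quick check below counts how many land listings fall above the 1000€ per square metre cap (an illustrative check using only the columns defined above).
###Code
# Land listings excluded by the price cap (suspected decimal-parsing errors).
((nepr.vrsta_nepremicnine == "Posest") & (nepr.cena_m2 >= 1000)).sum()
###Output
_____no_output_____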
###Markdown
As above, prices in the *ljubljana mesto* region are by far the highest in this section as well - although here the listings are mostly building plots rather than agricultural land. Shown are the price per square metre and the number of land listings for sale in each region.
###Code
posesti_po_regijah = posesti.groupby('regija')
gr6 = posesti_po_regijah.mean('cena_m2').sort_values('cena_m2', ascending = False).cena_m2.plot.bar()
gr6.set_title('Cena posesti po regijah')
gr6.set_xlabel('Regija')
gr6.set_ylabel('Cena na m2')
posesti_po_regijah.size()
###Output
_____no_output_____
###Markdown
As we can see, the highest prices belong to *Za investicijo* (investment) plots - empty parcels that nevertheless already have some basic documentation in place. This is perhaps surprising, since the *Kmetija* (farm) category also includes a house. By far the lowest prices belong to *Kmetijska zemljišča* (agricultural land), i.e. parcels intended for farming with no built infrastructure.The chart below also shows that land prices are substantially lower than those of other properties.
###Code
posesti_po_tipu = posesti.groupby('tip_nepremicnine')
gr7 = posesti_po_tipu.mean('cena_m2').sort_values('cena_m2', ascending = False).cena_m2.plot.bar()
gr7.set_title('Cena posesti po namenu')
gr7.set_xlabel('Tip posesti')
gr7.set_ylabel('Cena na m2')
###Output
_____no_output_____
###Markdown
Are prices higher through agencies?To conclude, let us check whether the prices of properties sold through agencies differ at all from private listings.
###Code
delo_z_agencijami = brez_posesti[['cena_m2','agencija']]
zasebniki = delo_z_agencijami[delo_z_agencijami.agencija == "Zasebna ponudba"]
agencije = delo_z_agencijami[delo_z_agencijami.agencija != "Zasebna ponudba"]
z = zasebniki[['cena_m2']].mean()
a = agencije[['cena_m2']].mean()
print("Povprečna cena zasebnikov: ",z)
print("Povprečna cena agencij: ", a)
###Output
Povprečna cena zasebnikov: cena_m2 1716.404217
dtype: float64
Povprečna cena agencij: cena_m2 1723.108856
dtype: float64
|
Transfer Learning/ClassifyFlowers_DL (TransferLearning_InceptionV3).ipynb | ###Markdown
Libraries
###Code
### Uncomment the next two lines to,
### install tensorflow_hub and tensorflow datasets
#!pip install tensorflow_hub
#!pip install tensorflow_datasets
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Download and Split data into Train and Validation
###Code
def get_data():
(train_set, validation_set), info = tfds.load(
'tf_flowers',
with_info=True,
as_supervised=True,
split=['train[:70%]', 'train[70%:]'],
)
return train_set, validation_set, info
train_set, validation_set, info = get_data()
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(len(train_set)))
print('Total Number of Validation Images: {} \n'.format(len(validation_set)))
img_shape = 299
batch_size = 32
def format_image(image, label):
image = tf.image.resize(image, (img_shape, img_shape))/255.0
return image, label
train_batches = train_set.shuffle(num_examples//4).map(format_image).batch(batch_size).prefetch(1)
validation_batches = validation_set.map(format_image).batch(batch_size).prefetch(1)
###Output
_____no_output_____
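###Markdown
A quick shape check of one batch from the input pipeline (an illustrative sketch only; it uses the `train_batches` defined above and does not change them).
###Code
# One batch of resized images and labels: (batch_size, 299, 299, 3) and (batch_size,).
sample_images, sample_labels = next(iter(train_batches))
print(sample_images.shape, sample_labels.shape)
###Output
_____no_output_____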
###Markdown
Getting Inception model learned features
###Code
def get_inception_features():
    URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
    global img_shape
    feature_extractor = hub.KerasLayer(URL, input_shape=(img_shape, img_shape, 3))
    return feature_extractor
### Freezing the layers of the transferred model (InceptionV3 model)
feature_extractor = get_inception_features()
feature_extractor.trainable = False
###Output
_____no_output_____
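###Markdown
The hub layer maps each 299x299 RGB image to a fixed-length feature vector (2048 values for this InceptionV3 module). The quick check below verifies that on a dummy batch; it is only an illustrative sketch and does not affect training.
###Code
# The frozen InceptionV3 feature extractor turns an image batch into feature vectors.
dummy = tf.zeros((1, img_shape, img_shape, 3))
print(feature_extractor(dummy).shape)  # expected (1, 2048)
###Output
_____no_output_____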
###Markdown
Deep Learning Model - Transfer Learning using InceptionV3
###Code
def create_transfer_learned_model(feature_extractor):
global num_classes
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.4),
layers.Dense(num_classes, activation='softmax')
])
model.compile(
optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the model ends in softmax, so it outputs probabilities, not logits
metrics=['accuracy'])
model.summary()
return model
###Output
_____no_output_____
###Markdown
Training the last classification layer of the model. Achieved validation accuracy: 92.10% (a significant improvement over the simple architecture).
###Code
epochs = 10
model = create_transfer_learned_model(feature_extractor)
history = model.fit(train_batches,
epochs=epochs,
validation_data=validation_batches)
###Output
_____no_output_____
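###Markdown
To double-check the validation accuracy quoted above, the model can also be evaluated on the validation batches directly (an optional sketch; the exact number will vary between runs).
###Code
# Evaluate the trained classifier on the validation set.
val_loss, val_acc = model.evaluate(validation_batches)
print('Validation accuracy: {:.2%}'.format(val_acc))
###Output
_____no_output_____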
###Markdown
Plotting Accuracy and Loss Curves
###Code
def create_plots(history):
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
global epochs
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
create_plots(history)
###Output
_____no_output_____
###Markdown
Prediction
###Code
def predict():
global train_batches, info
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
class_names = np.array(info.features['label'].names)
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
return image_batch, label_batch, predicted_ids, predicted_class_names
image_batch, label_batch, predicted_ids, predicted_class_names = predict()
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
def plot_figures():
global image_batch, predicted_ids, label_batch
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
plot_figures()
###Output
_____no_output_____ |
solutions_do_not_open/Lab_23_DL Sequence Generation_solution.ipynb | ###Markdown
Sequence GenerationIn this exercise, you will design an RNN to generate baby names! You will design an RNN to learn to predict the next letter of a name given the preceding letters. This is a character-level RNN rather than a word-level RNN.This idea comes from this excellent blog post: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
###Code
%matplotlib inline
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Embedding
from keras.layers import LSTM, SimpleRNN, GRU
###Output
Using TensorFlow backend.
###Markdown
Training DataThe training data we will use comes from this corpus:http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/nlp/corpora/names/Take a look at the training data in `data/names.txt`, which includes both boy and girl names. Below we load the file and convert it to all lower-case for simplicity.Note that we also add a special "end" character (in this case a period) to allow the model to learn to predict the end of a name.
###Code
with open('../data/names.txt') as f:
names = f.readlines()
names = [name.lower().strip() + '.' for name in names]
print('Loaded %d names' % len(names))
names[:10]
###Output
_____no_output_____
###Markdown
We need to count all of the characters in our "vocabulary" and build a dictionary that translates between the character and its assigned index (and vice versa).
###Code
chars = set()
for name in names:
chars.update(name)
vocab_size = len(chars)
print('Vocabulary size:', vocab_size)
char_inds = dict((c, i) for i, c in enumerate(chars))
inds_char = dict((i, c) for i, c in enumerate(chars))
char_inds
###Output
_____no_output_____
###Markdown
Exercise 1 - translate chars to indexesMost of the work of preparing the data is taken care of, but it is important to know the steps because they will be needed anytime you want to train an RNN. Use the dictionary created above to translate each example in `names` to its number format in `int_names`.
###Code
# Translate names to their number format in int_names
int_names = [[char_inds[x] for x in name] for name in names]
# for name in names:
# int_names.append()
###Output
_____no_output_____
###Markdown
The `create_matrix_from_sequences` will take the examples and create training data by cutting up names into input sequence of length `maxlen` and training labels, which are the following character. Make sure you understand this procedure because it is what will actually go into the network!
###Code
def create_matrix_from_sequences(int_names, maxlen, step=1):
name_parts = []
next_chars = []
for name in int_names:
for i in range(0, len(name) - maxlen, step):
name_parts.append(name[i: i + maxlen])
next_chars.append(name[i + maxlen])
return name_parts, next_chars
maxlen = 3
name_parts, next_chars = create_matrix_from_sequences(int_names, maxlen)
print('Created %d name segments' % len(name_parts))
X_train = sequence.pad_sequences(name_parts, maxlen=maxlen)
y_train = np_utils.to_categorical(next_chars, vocab_size)
X_train.shape
X_train[:5]
###Output
_____no_output_____
###Markdown
Exercise 2 - design a modelDesign your model below. Like before, you will need to set up the embedding layer, the recurrent layer, a dense connection and a softmax to predict the next character.Fit the model by running at least 10 epochs. Later you will generate names with the model. Getting around 30% accuracy will usually result in decent generations. What is the accuracy you would expect for random guessing?
###Code
# Design an RNN model
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length=maxlen))
model.add(LSTM(32, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(vocab_size))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=10, verbose=1)
###Output
Epoch 1/10
32016/32016 [==============================] - 32s 993us/step - loss: 2.4909 - acc: 0.2643
Epoch 2/10
32016/32016 [==============================] - 30s 934us/step - loss: 2.2727 - acc: 0.2901
Epoch 3/10
32016/32016 [==============================] - 29s 904us/step - loss: 2.1985 - acc: 0.3024
Epoch 4/10
32016/32016 [==============================] - 29s 906us/step - loss: 2.1440 - acc: 0.3165
Epoch 5/10
32016/32016 [==============================] - 28s 869us/step - loss: 2.1091 - acc: 0.3228
Epoch 6/10
32016/32016 [==============================] - 22s 674us/step - loss: 2.0836 - acc: 0.3330
Epoch 7/10
32016/32016 [==============================] - 14s 426us/step - loss: 2.0616 - acc: 0.3397
Epoch 8/10
32016/32016 [==============================] - 27s 853us/step - loss: 2.0286 - acc: 0.3489
Epoch 10/10
32016/32016 [==============================] - 27s 838us/step - loss: 2.0136 - acc: 0.3522
###Markdown
Sampling from the modelWe can sample the model by feeding in a few letters and using the model's prediction for the next letter. Then we feed the model's prediction back in to get the next letter, etc.The `sample` function is a helper to allow you to adjust the diversity of the samples. You can read more [here](https://en.wikipedia.org/wiki/Softmax_function#Reinforcement_learning).Read the `gen_name` function to understand how the model is sampled.
###Code
def sample(p, diversity=1.0):
p1 = np.asarray(p).astype('float64')
p1 = np.log(p1) / diversity
e_p1 = np.exp(p1)
s = np.sum(e_p1)
p1 = e_p1 / s
return np.argmax(np.random.multinomial(1, p1, 1))
def gen_name(seed, length=1, diversity=1.0, maxlen=3):
"""
seed - the start of the name to sample
length - the number of letters to sample; if None then samples
are generated until the model generates a '.' character
diversity - a knob to increase or decrease the randomness of the
samples; higher = more random, lower = closer to the model's
prediction
maxlen - the size of the model's input
"""
# Prepare input array
x = np.zeros((1, maxlen), dtype=int)
# Generate samples
out = seed
while length is None or len(out) < len(seed) + length:
# Add the last chars so far for the next input
for i, c in enumerate(out[-maxlen:]):
x[0, i] = char_inds[c]
# Get softmax for next character
preds = model.predict(x, verbose=0)[0]
# Sample the network output with diversity
c = sample(preds, diversity)
# Choose to end if the model generated an end token
if c == char_inds['.']:
if length is None:
return out
else:
continue
# Build up output
out += inds_char[c]
return out
###Output
_____no_output_____
###Markdown
Exercise 3 - sample the modelUse the `gen_name` function above to sample some names from your model.1. Try generating a few characters by setting the `length` argument.2. Try different diversities. Start with 1.0 and vary it up and down.3. Try using `length=None`, allowing the model to choose when to end a name.4. What happens when `length=None` and the diversity is high? How do samples change in this case from beginning to end? Why do you think this is?5. With `length=None` and a "good" diversity, can you tell if the model has learned a repertoire of "endings"? What are some of them? 6. Find some good names. What are your favorites? :D
###Code
gen_name('', length=10, diversity=1.0)
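# An extra sketch for Exercise 3 (not part of the original solution): sample at a few
# diversities and let the model decide when to stop (length=None).
for div in [0.5, 1.0, 1.5]:
    print(div, [gen_name('a', length=None, diversity=div) for _ in range(3)])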
###Output
_____no_output_____
###Markdown
Sequence GenerationIn this exercise, you will design an RNN to generate baby names! You will design an RNN to learn to predict the next letter of a name given the preceding letters. This is a character-level RNN rather than a word-level RNN.This idea comes from this excellent blog post: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
###Code
%matplotlib inline
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Embedding
from keras.layers import LSTM, SimpleRNN, GRU
###Output
_____no_output_____
###Markdown
Training DataThe training data we will use comes from this corpus:http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/nlp/corpora/names/Take a look at the training data in `data/names.txt`, which includes both boy and girl names. Below we load the file and convert it to all lower-case for simplicity.Note that we also add a special "end" character (in this case a period) to allow the model to learn to predict the end of a name.
###Code
with open('../data/names.txt') as f:
names = f.readlines()
names = [name.lower().strip() + '.' for name in names]
print('Loaded %d names' % len(names))
names[:10]
###Output
_____no_output_____
###Markdown
We need to count all of the characters in our "vocabulary" and build a dictionary that translates between the character and its assigned index (and vice versa).
###Code
chars = set()
for name in names:
chars.update(name)
vocab_size = len(chars)
print('Vocabulary size:', vocab_size)
char_inds = dict((c, i) for i, c in enumerate(chars))
inds_char = dict((i, c) for i, c in enumerate(chars))
char_inds
###Output
_____no_output_____
###Markdown
Exercise 1 - translate chars to indexesMost of the work of preparing the data is taken care of, but it is important to know the steps because they will be needed anytime you want to train an RNN. Use the dictionary created above to translate each example in `names` to its number format in `int_names`.
###Code
# Translate names to their number format in int_names
int_names = [[char_inds[x] for x in name] for name in names]
# for name in names:
# int_names.append()
###Output
_____no_output_____
###Markdown
The `create_matrix_from_sequences` will take the examples and create training data by cutting up names into input sequence of length `maxlen` and training labels, which are the following character. Make sure you understand this procedure because it is what will actually go into the network!
###Code
def create_matrix_from_sequences(int_names, maxlen, step=1):
name_parts = []
next_chars = []
for name in int_names:
for i in range(0, len(name) - maxlen, step):
name_parts.append(name[i: i + maxlen])
next_chars.append(name[i + maxlen])
return name_parts, next_chars
maxlen = 3
name_parts, next_chars = create_matrix_from_sequences(int_names, maxlen)
print('Created %d name segments' % len(name_parts))
X_train = sequence.pad_sequences(name_parts, maxlen=maxlen)
y_train = np_utils.to_categorical(next_chars, vocab_size)
X_train.shape
X_train[:5]
###Output
_____no_output_____
###Markdown
Exercise 2 - design a modelDesign your model below. Like before, you will need to set up the embedding layer, the recurrent layer, a dense connection and a softmax to predict the next character.Fit the model by running at least 10 epochs. Later you will generate names with the model. Getting around 30% accuracy will usually result in decent generations. What is the accuracy you would expect for random guessing?
###Code
# Design an RNN model
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length=maxlen))
model.add(LSTM(32, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(vocab_size))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=10, verbose=1)
###Output
_____no_output_____
###Markdown
Sampling from the modelWe can sample the model by feeding in a few letters and using the model's prediction for the next letter. Then we feed the model's prediction back in to get the next letter, etc.The `sample` function is a helper to allow you to adjust the diversity of the samples. You can read more [here](https://en.wikipedia.org/wiki/Softmax_function#Reinforcement_learning).Read the `gen_name` function to understand how the model is sampled.
###Code
def sample(p, diversity=1.0):
p1 = np.asarray(p).astype('float64')
p1 = np.log(p1) / diversity
e_p1 = np.exp(p1)
s = np.sum(e_p1)
p1 = e_p1 / s
return np.argmax(np.random.multinomial(1, p1, 1))
def gen_name(seed, length=1, diversity=1.0, maxlen=3):
"""
seed - the start of the name to sample
length - the number of letters to sample; if None then samples
are generated until the model generates a '.' character
diversity - a knob to increase or decrease the randomness of the
samples; higher = more random, lower = closer to the model's
prediction
maxlen - the size of the model's input
"""
# Prepare input array
x = np.zeros((1, maxlen), dtype=int)
# Generate samples
out = seed
while length is None or len(out) < len(seed) + length:
# Add the last chars so far for the next input
for i, c in enumerate(out[-maxlen:]):
x[0, i] = char_inds[c]
# Get softmax for next character
preds = model.predict(x, verbose=0)[0]
# Sample the network output with diversity
c = sample(preds, diversity)
# Choose to end if the model generated an end token
if c == char_inds['.']:
if length is None:
return out
else:
continue
# Build up output
out += inds_char[c]
return out
###Output
_____no_output_____
###Markdown
Exercise 3 - sample the modelUse the `gen_name` function above to sample some names from your model.1. Try generating a few characters by setting the `length` argument.2. Try different diversities. Start with 1.0 and vary it up and down.3. Try using `length=None`, allowing the model to choose when to end a name.4. What happens when `length=None` and the diversity is high? How do samples change in this case from beginning to end? Why do you think this is?5. With `length=None` and a "good" diversity, can you tell if the model has learned a repertoire of "endings"? What are some of them? 6. Find some good names. What are your favorites? :D
###Code
gen_name('', length=10, diversity=1.0)
###Output
_____no_output_____
###Markdown
Sequence GenerationIn this exercise, you will design an RNN to generate baby names! You will design an RNN to learn to predict the next letter of a name given the preceding letters. This is a character-level RNN rather than a word-level RNN.This idea comes from this excellent blog post: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
###Code
%matplotlib inline
import numpy as np
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Embedding
from tensorflow.keras.layers import LSTM, SimpleRNN, GRU
###Output
_____no_output_____
###Markdown
Training DataThe training data we will use comes from this corpus:http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/nlp/corpora/names/Take a look at the training data in `data/names.txt`, which includes both boy and girl names. Below we load the file and convert it to all lower-case for simplicity.Note that we also add a special "end" character (in this case a period) to allow the model to learn to predict the end of a name.
###Code
with open('../data/names.txt') as f:
names = f.readlines()
names = [name.lower().strip() + '.' for name in names]
print('Loaded %d names' % len(names))
names[:10]
###Output
_____no_output_____
###Markdown
We need to count all of the characters in our "vocabulary" and build a dictionary that translates between the character and its assigned index (and vice versa).
###Code
chars = set()
for name in names:
chars.update(name)
vocab_size = len(chars)
print('Vocabulary size:', vocab_size)
char_inds = dict((c, i) for i, c in enumerate(chars))
inds_char = dict((i, c) for i, c in enumerate(chars))
char_inds
###Output
_____no_output_____
###Markdown
Exercise 1 - translate chars to indexesMost of the work of preparing the data is taken care of, but it is important to know the steps because they will be needed anytime you want to train an RNN. Use the dictionary created above to translate each example in `names` to its number format in `int_names`.
###Code
# Translate names to their number format in int_names
int_names = [[char_inds[x] for x in name] for name in names]
# for name in names:
# int_names.append()
###Output
_____no_output_____
###Markdown
The `create_matrix_from_sequences` will take the examples and create training data by cutting up names into input sequence of length `maxlen` and training labels, which are the following character. Make sure you understand this procedure because it is what will actually go into the network!
###Code
def create_matrix_from_sequences(int_names, maxlen, step=1):
name_parts = []
next_chars = []
for name in int_names:
for i in range(0, len(name) - maxlen, step):
name_parts.append(name[i: i + maxlen])
next_chars.append(name[i + maxlen])
return name_parts, next_chars
maxlen = 3
name_parts, next_chars = create_matrix_from_sequences(int_names, maxlen)
print('Created %d name segments' % len(name_parts))
X_train = sequence.pad_sequences(name_parts, maxlen=maxlen)
y_train = to_categorical(next_chars, vocab_size)
X_train.shape
X_train[:5]
###Output
_____no_output_____
###Markdown
Exercise 2 - design a modelDesign your model below. Like before, you will need to set up the embedding layer, the recurrent layer, a dense connection and a softmax to predict the next character.Fit the model by running at least 10 epochs. Later you will generate names with the model. Getting around 30% accuracy will usually result in decent generations. What is the accuracy you would expect for random guessing?
###Code
# Design an RNN model
model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=maxlen))
model.add(LSTM(32, dropout=0.2))
model.add(Dense(vocab_size))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, epochs=10, verbose=1)
###Output
_____no_output_____
###Markdown
Sampling from the modelWe can sample the model by feeding in a few letters and using the model's prediction for the next letter. Then we feed the model's prediction back in to get the next letter, etc.The `sample` function is a helper to allow you to adjust the diversity of the samples. You can read more [here](https://en.wikipedia.org/wiki/Softmax_function#Reinforcement_learning).Read the `gen_name` function to understand how the model is sampled.
###Code
def sample(p, diversity=1.0):
p1 = np.asarray(p).astype('float64')
p1 = np.log(p1) / diversity
e_p1 = np.exp(p1)
s = np.sum(e_p1)
p1 = e_p1 / s
return np.argmax(np.random.multinomial(1, p1, 1))
def gen_name(seed, length=1, diversity=1.0, maxlen=3):
"""
seed - the start of the name to sample
length - the number of letters to sample; if None then samples
are generated until the model generates a '.' character
diversity - a knob to increase or decrease the randomness of the
samples; higher = more random, lower = closer to the model's
prediction
maxlen - the size of the model's input
"""
# Prepare input array
x = np.zeros((1, maxlen), dtype=int)
# Generate samples
out = seed
while length is None or len(out) < len(seed) + length:
# Add the last chars so far for the next input
for i, c in enumerate(out[-maxlen:]):
x[0, i] = char_inds[c]
# Get softmax for next character
preds = model.predict(x, verbose=0)[0]
# Sample the network output with diversity
c = sample(preds, diversity)
# Choose to end if the model generated an end token
if c == char_inds['.']:
if length is None:
return out
else:
continue
# Build up output
out += inds_char[c]
return out
###Output
_____no_output_____
###Markdown
Exercise 3 - sample the modelUse the `gen_name` function above to sample some names from your model.1. Try generating a few characters by setting the `length` argument.2. Try different diversities. Start with 1.0 and vary it up and down.3. Try using `length=None`, allowing the model to choose when to end a name.4. What happens when `length=None` and the diversity is high? How do samples change in this case from beginning to end? Why do you think this is?5. With `length=None` and a "good" diversity, can you tell if the model has learned a repertoire of "endings"? What are some of them? 6. Find some good names. What are your favorites? :D
###Code
gen_name('jen', length=8, diversity=1.0)
###Output
_____no_output_____ |
docs/40_tabular_data_wrangling/introduction_dataframes.ipynb | ###Markdown
Introduction to working with DataFramesIn basic python, we often use dictionaries containing our measurements as vectors. While these basic structures are handy for collecting data, they are suboptimal for further data processing. For that we introduce [panda DataFrames](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) which are more handy in the next steps. In Python, scientists often call tables "DataFrames".
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Creating DataFrames from a dictionary of listsAssume we did some image processing and have some results in available in a dictionary that contains lists of numbers:
###Code
measurements = {
"labels": [1, 2, 3],
"area": [45, 23, 68],
"minor_axis": [2, 4, 4],
"major_axis": [3, 4, 5],
}
###Output
_____no_output_____
###Markdown
This data structure can be nicely visualized using a DataFrame:
###Code
df = pd.DataFrame(measurements)
df
###Output
_____no_output_____
###Markdown
Using these DataFrames, data modification is straighforward. For example one can append a new column and compute its values from existing columns:
###Code
df["aspect_ratio"] = df["major_axis"] / df["minor_axis"]
df
###Output
_____no_output_____
###Markdown
We can also save this table for continuing to work with it.
###Code
df.to_csv("../../data/short_table.csv")
###Output
_____no_output_____
###Markdown
Creating DataFrames from lists of listsSometimes, we are confronted to data in form of lists of lists. To make pandas understand that form of data correctly, we also need to provide the headers in the same order as the lists
###Code
header = ['labels', 'area', 'minor_axis', 'major_axis']
data = [
[1, 2, 3],
[45, 23, 68],
[2, 4, 4],
[3, 4, 5],
]
# convert the data and header arrays in a pandas data frame
data_frame = pd.DataFrame(data, header)
# show it
data_frame
###Output
_____no_output_____
###Markdown
As you can see, this tabls is _rotated_. We can bring it in the usual form like this:
###Code
# rotate/flip it
data_frame = data_frame.transpose()
# show it
data_frame
###Output
_____no_output_____
###Markdown
Introduction to working with DataFramesIn basic python, we often use dictionaries containing our measurements as vectors. While these basic structures are handy for collecting data, they are suboptimal for further data processing. For that we introduce [panda DataFrames](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) which are more handy in the next steps. In Python, scientists often call tables "DataFrames".
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Creating DataFrames from a dictionary of listsAssume we did some image processing and have some results in available in a dictionary that contains lists of numbers:
###Code
measurements = {
"labels": [1, 2, 3],
"area": [45, 23, 68],
"minor_axis": [2, 4, 4],
"major_axis": [3, 4, 5],
}
###Output
_____no_output_____
###Markdown
This data structure can be nicely visualized using a DataFrame:
###Code
df = pd.DataFrame(measurements)
df
###Output
_____no_output_____
###Markdown
Using these DataFrames, data modification is straighforward. For example one can append a new column and compute its values from existing columns:
###Code
df["aspect_ratio"] = df["major_axis"] / df["minor_axis"]
df
###Output
_____no_output_____
###Markdown
Saving data framesWe can also save this table for continuing to work with it.
###Code
df.to_csv("../../data/short_table.csv")
###Output
_____no_output_____
###Markdown
Creating DataFrames from lists of listsSometimes, we are confronted to data in form of lists of lists. To make pandas understand that form of data correctly, we also need to provide the headers in the same order as the lists
###Code
header = ['labels', 'area', 'minor_axis', 'major_axis']
data = [
[1, 2, 3],
[45, 23, 68],
[2, 4, 4],
[3, 4, 5],
]
# convert the data and header arrays in a pandas data frame
data_frame = pd.DataFrame(data, header)
# show it
data_frame
###Output
_____no_output_____
###Markdown
As you can see, this tabls is _rotated_. We can bring it in the usual form like this:
###Code
# rotate/flip it
data_frame = data_frame.transpose()
# show it
data_frame
###Output
_____no_output_____
###Markdown
Loading data framesTables can also be read from CSV files.
###Code
df_csv = pd.read_csv('../../data/blobs_statistics.csv')
df_csv
###Output
_____no_output_____
###Markdown
Typically, we don't need all the information in these tables and thus, it makes sense to reduce the table. For that, we print out the column names first.
###Code
df_csv.keys()
###Output
_____no_output_____
###Markdown
We can then copy&paste the colum names we're interested in and create a new data frame.
###Code
df_analysis = df_csv[['area', 'mean_intensity']]
df_analysis
###Output
_____no_output_____
###Markdown
You can then access columns and add new columns.
###Code
df_analysis['total_intensity'] = df_analysis['area'] * df_analysis['mean_intensity']
df_analysis
###Output
C:\Users\rober\AppData\Local\Temp/ipykernel_20588/206920941.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_analysis['total_intensity'] = df_analysis['area'] * df_analysis['mean_intensity']
###Markdown
ExerciseFor the loaded CSV file, create a table that only contains these columns:* `minor_axis_length`* `major_axis_length`* `aspect_ratio`
###Code
df_shape = pd.read_csv('../../data/blobs_statistics.csv')
###Output
_____no_output_____ |
Curso Tensorflow/Curso3-NaturalLanguageProcessing/semana3/Course_3_Week_3_Lesson_1c.ipynb | ###Markdown
Multiple Layer GRU
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)
# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
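# The lesson title refers to a multiple-layer GRU, while the Sequential above uses a
# Conv1D + pooling head. Below is a minimal sketch of a stacked bidirectional-GRU
# variant (an illustrative assumption, not part of the original lesson; it is
# defined and compiled here but not trained).
gru_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
gru_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])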
NUM_EPOCHS = 10
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
###Output
_____no_output_____ |
wine_classification.ipynb | ###Markdown
Dataset Load
###Code
import pandas as pd
import numpy as np
arquivo = pd.read_csv('wine_dataset.csv')
arquivo.head()
arquivo['style'] = arquivo['style'].replace('red', 0)
arquivo['style'] = arquivo['style'].replace('white', 1)
###Output
_____no_output_____
###Markdown
Separating Variables between Predictors and Target
###Code
y = arquivo['style']
x = arquivo.drop('style', axis = 1)
from sklearn.model_selection import train_test_split
#test dataset and train dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
from sklearn.ensemble import ExtraTreesClassifier
#model creation
model = ExtraTreesClassifier(n_estimators = 100)
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Evaluation and Show Results
###Code
#showing results
results = model.score(x_test, y_test)
print("Accuracy:", results)
y_test[400:410]
x_test[400:410]
predictions = model.predict(x_test[400:410])
predictions
###Output
_____no_output_____
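###Markdown
As a quick follow-up, the sketch below compares the predictions above with the true labels for the same ten rows (an illustrative check that only reuses the slices already shown).
###Code
# Fraction of the ten sampled rows that the classifier got right.
(predictions == y_test[400:410].values).mean()
###Output
_____no_output_____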
###Markdown
###Code
from sklearn import datasets
wine = datasets.load_wine()
wine.keys()
x_data = wine['data']
x_data.shape
y_data = wine['target']
y_data
# wine quality datasets
###Output
_____no_output_____
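###Markdown
The `model` fitted in the next cell is a Keras network rather than the scikit-learn classifier above; its actual definition appears further down in this notebook. For readability, a minimal sketch of a compatible classifier for the 13-feature, 3-class sklearn wine data is given below - this is an assumption mirroring the later definition, not an original cell of the notebook.
###Code
# Sketch of a simple Keras classifier for the sklearn wine data (13 features, 3 classes),
# mirroring the tf.keras.Sequential model defined later in this notebook.
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.Input(shape=(13,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____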
###Markdown
Training
###Code
model.fit(x_data, y_data, epochs= 50, validation_split= 0.3)
###Output
Epoch 1/50
4/4 [==============================] - 1s 61ms/step - loss: 187.8489 - acc: 0.0000e+00 - val_loss: 79.6541 - val_acc: 0.0000e+00
Epoch 2/50
4/4 [==============================] - 0s 8ms/step - loss: 103.6071 - acc: 0.0000e+00 - val_loss: 52.8413 - val_acc: 0.0000e+00
Epoch 3/50
4/4 [==============================] - 0s 8ms/step - loss: 38.4694 - acc: 0.0323 - val_loss: 47.9590 - val_acc: 0.0000e+00
Epoch 4/50
4/4 [==============================] - 0s 8ms/step - loss: 11.2578 - acc: 0.4435 - val_loss: 59.2932 - val_acc: 0.0000e+00
Epoch 5/50
4/4 [==============================] - 0s 8ms/step - loss: 11.2262 - acc: 0.1371 - val_loss: 67.7191 - val_acc: 0.0000e+00
Epoch 6/50
4/4 [==============================] - 0s 9ms/step - loss: 10.4958 - acc: 0.2581 - val_loss: 81.0860 - val_acc: 0.0000e+00
Epoch 7/50
4/4 [==============================] - 0s 8ms/step - loss: 9.4065 - acc: 0.3871 - val_loss: 80.2390 - val_acc: 0.0000e+00
Epoch 8/50
4/4 [==============================] - 0s 9ms/step - loss: 8.3128 - acc: 0.1210 - val_loss: 83.4695 - val_acc: 0.0000e+00
Epoch 9/50
4/4 [==============================] - 0s 11ms/step - loss: 6.8257 - acc: 0.3065 - val_loss: 86.6589 - val_acc: 0.0000e+00
Epoch 10/50
4/4 [==============================] - 0s 9ms/step - loss: 5.6796 - acc: 0.2984 - val_loss: 84.8092 - val_acc: 0.0000e+00
Epoch 11/50
4/4 [==============================] - 0s 10ms/step - loss: 4.7022 - acc: 0.2097 - val_loss: 86.1862 - val_acc: 0.0000e+00
Epoch 12/50
4/4 [==============================] - 0s 9ms/step - loss: 3.5400 - acc: 0.2097 - val_loss: 85.9222 - val_acc: 0.0000e+00
Epoch 13/50
4/4 [==============================] - 0s 9ms/step - loss: 2.3589 - acc: 0.2984 - val_loss: 84.7075 - val_acc: 0.0000e+00
Epoch 14/50
4/4 [==============================] - 0s 9ms/step - loss: 1.3187 - acc: 0.3306 - val_loss: 84.1312 - val_acc: 0.1111
Epoch 15/50
4/4 [==============================] - 0s 10ms/step - loss: 0.6171 - acc: 0.6935 - val_loss: 84.4679 - val_acc: 0.1111
Epoch 16/50
4/4 [==============================] - 0s 8ms/step - loss: 0.3382 - acc: 0.8468 - val_loss: 84.8933 - val_acc: 0.0926
Epoch 17/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2922 - acc: 0.8629 - val_loss: 84.9553 - val_acc: 0.1111
Epoch 18/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2541 - acc: 0.9032 - val_loss: 85.0100 - val_acc: 0.1111
Epoch 19/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2429 - acc: 0.8952 - val_loss: 85.1109 - val_acc: 0.1111
Epoch 20/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2581 - acc: 0.9113 - val_loss: 85.1603 - val_acc: 0.1111
Epoch 21/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2587 - acc: 0.8710 - val_loss: 85.2129 - val_acc: 0.1111
Epoch 22/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2494 - acc: 0.9032 - val_loss: 85.2242 - val_acc: 0.1111
Epoch 23/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2333 - acc: 0.8952 - val_loss: 85.2936 - val_acc: 0.1111
Epoch 24/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2406 - acc: 0.9032 - val_loss: 85.2686 - val_acc: 0.1111
Epoch 25/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2594 - acc: 0.8871 - val_loss: 85.2778 - val_acc: 0.1111
Epoch 26/50
4/4 [==============================] - 0s 8ms/step - loss: 0.2802 - acc: 0.8871 - val_loss: 85.2615 - val_acc: 0.1111
Epoch 27/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2732 - acc: 0.8790 - val_loss: 85.2610 - val_acc: 0.1111
Epoch 28/50
4/4 [==============================] - 0s 9ms/step - loss: 0.3175 - acc: 0.8871 - val_loss: 85.2607 - val_acc: 0.1111
Epoch 29/50
4/4 [==============================] - 0s 13ms/step - loss: 0.3396 - acc: 0.8548 - val_loss: 85.2888 - val_acc: 0.1111
Epoch 30/50
4/4 [==============================] - 0s 9ms/step - loss: 0.3503 - acc: 0.8952 - val_loss: 85.2845 - val_acc: 0.0926
Epoch 31/50
4/4 [==============================] - 0s 10ms/step - loss: 0.3767 - acc: 0.8548 - val_loss: 85.3539 - val_acc: 0.1111
Epoch 32/50
4/4 [==============================] - 0s 11ms/step - loss: 0.3308 - acc: 0.9032 - val_loss: 85.3112 - val_acc: 0.0926
Epoch 33/50
4/4 [==============================] - 0s 9ms/step - loss: 0.3390 - acc: 0.8548 - val_loss: 85.2755 - val_acc: 0.1111
Epoch 34/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2402 - acc: 0.9032 - val_loss: 85.2429 - val_acc: 0.1111
Epoch 35/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2315 - acc: 0.9032 - val_loss: 85.2327 - val_acc: 0.1111
Epoch 36/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2150 - acc: 0.9032 - val_loss: 85.2302 - val_acc: 0.1111
Epoch 37/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2163 - acc: 0.9032 - val_loss: 85.2398 - val_acc: 0.1111
Epoch 38/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2147 - acc: 0.9194 - val_loss: 85.2333 - val_acc: 0.1111
Epoch 39/50
4/4 [==============================] - 0s 12ms/step - loss: 0.2144 - acc: 0.9194 - val_loss: 85.2159 - val_acc: 0.1111
Epoch 40/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2102 - acc: 0.9032 - val_loss: 85.2586 - val_acc: 0.1111
Epoch 41/50
4/4 [==============================] - 0s 14ms/step - loss: 0.2522 - acc: 0.9032 - val_loss: 85.2341 - val_acc: 0.1111
Epoch 42/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2769 - acc: 0.8952 - val_loss: 85.2138 - val_acc: 0.1111
Epoch 43/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2953 - acc: 0.8629 - val_loss: 85.2929 - val_acc: 0.1111
Epoch 44/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2455 - acc: 0.9113 - val_loss: 85.2324 - val_acc: 0.1111
Epoch 45/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2553 - acc: 0.8952 - val_loss: 85.3395 - val_acc: 0.1111
Epoch 46/50
4/4 [==============================] - 0s 13ms/step - loss: 0.2718 - acc: 0.9032 - val_loss: 85.2272 - val_acc: 0.1111
Epoch 47/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2202 - acc: 0.9032 - val_loss: 85.2929 - val_acc: 0.1111
Epoch 48/50
4/4 [==============================] - 0s 12ms/step - loss: 0.2507 - acc: 0.8871 - val_loss: 85.3058 - val_acc: 0.0926
Epoch 49/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2605 - acc: 0.8952 - val_loss: 85.3099 - val_acc: 0.1111
Epoch 50/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2033 - acc: 0.8952 - val_loss: 85.2963 - val_acc: 0.0926
###Markdown
Evaluation
###Code
model.evaluate(x_data, y_data)
###Output
6/6 [==============================] - 0s 3ms/step - loss: 26.0794 - acc: 0.6348
###Markdown
Service
###Code
x_data[25], y_data[25]
pred = model.predict([[1.305e+01, 2.050e+00, 3.220e+00, 2.500e+01, 1.240e+02, 2.630e+00,
2.680e+00, 4.700e-01, 1.920e+00, 3.580e+00, 1.130e+00, 3.200e+00,
8.300e+02]])
pred
import numpy as np
np.argmax(pred)
###Output
_____no_output_____
###Markdown
###Code
from sklearn import datasets
wine = datasets.load_wine()
wine.keys()
x_data = wine['data']
x_data.shape
y_data = wine['target']
y_data
import pandas as pd
df_wine = pd.DataFrame(wine.data)
df_wine.info()
df_twine = pd.DataFrame(wine.target)
df_twine
df_wine['y_col'] = df_twine
import sqlite3
connect = sqlite3.connect('./db.sqlite3')
df_wine.to_sql('datax_resource', connect, if_exists='append', index=False)
df_twine.to_sql('datay_resource', connect, if_exists='append', index=False)
db_wine = pd.read_sql_query('select * from datax_resource',connect)
db_wine.head(4)
# x_data = wine['data']
# x_data.shape
dfnpx = df_wine.iloc[:,[0,1,2,3,4,5,6,7,8,9,10,11,12]].to_numpy()
#dfnpx = df_wine.drop('y_col',axis=1).to_numpy()
dfnpx.shape
dfnpy = df_wine.loc[:,'y_col']
dfnpy
import numpy as np
# y_data
# y_data, np.unique(y_data)
dfnpy, np.unique(dfnpy)
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(13,))) # input layer
model.add(tf.keras.layers.Dense(64, activation='relu')) # hidden layer
model.add(tf.keras.layers.Dense(64, activation='relu')) # hidden layer
model.add(tf.keras.layers.Dense(3, activation='softmax')) # output layer
model.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy',metrics=['acc'])
model.summary()
model.fit(dfnpx, dfnpy, epochs=50, validation_split=0.3)
###Output
Epoch 1/50
4/4 [==============================] - 1s 63ms/step - loss: 223.9878 - acc: 0.4194 - val_loss: 0.4822 - val_acc: 0.8889
Epoch 2/50
4/4 [==============================] - 0s 8ms/step - loss: 158.6598 - acc: 0.1855 - val_loss: 9.2015 - val_acc: 0.1111
Epoch 3/50
4/4 [==============================] - 0s 11ms/step - loss: 96.0259 - acc: 0.5242 - val_loss: 2.1362 - val_acc: 0.1111
Epoch 4/50
4/4 [==============================] - 0s 8ms/step - loss: 35.0275 - acc: 0.1774 - val_loss: 6.4503 - val_acc: 0.0000e+00
Epoch 5/50
4/4 [==============================] - 0s 9ms/step - loss: 10.4022 - acc: 0.4355 - val_loss: 57.2656 - val_acc: 0.0000e+00
Epoch 6/50
4/4 [==============================] - 0s 8ms/step - loss: 17.6420 - acc: 0.4758 - val_loss: 79.3170 - val_acc: 0.0000e+00
Epoch 7/50
4/4 [==============================] - 0s 50ms/step - loss: 11.2841 - acc: 0.4758 - val_loss: 82.4853 - val_acc: 0.0000e+00
Epoch 8/50
4/4 [==============================] - 0s 9ms/step - loss: 6.3667 - acc: 0.5242 - val_loss: 96.7073 - val_acc: 0.1111
Epoch 9/50
4/4 [==============================] - 0s 8ms/step - loss: 3.9942 - acc: 0.4516 - val_loss: 103.1479 - val_acc: 0.0000e+00
Epoch 10/50
4/4 [==============================] - 0s 8ms/step - loss: 6.2383 - acc: 0.4758 - val_loss: 107.3301 - val_acc: 0.0000e+00
Epoch 11/50
4/4 [==============================] - 0s 8ms/step - loss: 2.5474 - acc: 0.5645 - val_loss: 109.6605 - val_acc: 0.1111
Epoch 12/50
4/4 [==============================] - 0s 8ms/step - loss: 2.6263 - acc: 0.5645 - val_loss: 109.6625 - val_acc: 0.0000e+00
Epoch 13/50
4/4 [==============================] - 0s 9ms/step - loss: 2.6345 - acc: 0.4839 - val_loss: 109.9138 - val_acc: 0.0370
Epoch 14/50
4/4 [==============================] - 0s 11ms/step - loss: 0.9100 - acc: 0.6371 - val_loss: 111.6703 - val_acc: 0.1111
Epoch 15/50
4/4 [==============================] - 0s 10ms/step - loss: 0.6129 - acc: 0.7419 - val_loss: 111.1160 - val_acc: 0.0926
Epoch 16/50
4/4 [==============================] - 0s 10ms/step - loss: 0.4698 - acc: 0.7984 - val_loss: 113.1865 - val_acc: 0.1111
Epoch 17/50
4/4 [==============================] - 0s 16ms/step - loss: 0.3478 - acc: 0.8468 - val_loss: 112.0447 - val_acc: 0.0926
Epoch 18/50
4/4 [==============================] - 0s 9ms/step - loss: 0.4489 - acc: 0.8387 - val_loss: 112.9501 - val_acc: 0.1111
Epoch 19/50
4/4 [==============================] - 0s 9ms/step - loss: 0.3390 - acc: 0.8710 - val_loss: 113.5653 - val_acc: 0.1111
Epoch 20/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2806 - acc: 0.8952 - val_loss: 112.6982 - val_acc: 0.0926
Epoch 21/50
4/4 [==============================] - 0s 9ms/step - loss: 0.3031 - acc: 0.8790 - val_loss: 113.8590 - val_acc: 0.1111
Epoch 22/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2611 - acc: 0.9113 - val_loss: 112.7779 - val_acc: 0.1111
Epoch 23/50
4/4 [==============================] - 0s 8ms/step - loss: 0.2790 - acc: 0.8387 - val_loss: 113.1525 - val_acc: 0.1111
Epoch 24/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2321 - acc: 0.9113 - val_loss: 112.9618 - val_acc: 0.1111
Epoch 25/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2363 - acc: 0.8710 - val_loss: 112.8883 - val_acc: 0.1111
Epoch 26/50
4/4 [==============================] - 0s 14ms/step - loss: 0.2530 - acc: 0.9113 - val_loss: 112.9004 - val_acc: 0.1111
Epoch 27/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2776 - acc: 0.8548 - val_loss: 112.7781 - val_acc: 0.1111
Epoch 28/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2319 - acc: 0.9032 - val_loss: 113.2516 - val_acc: 0.1111
Epoch 29/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2434 - acc: 0.8952 - val_loss: 112.7431 - val_acc: 0.1111
Epoch 30/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2701 - acc: 0.8952 - val_loss: 112.8836 - val_acc: 0.1111
Epoch 31/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2560 - acc: 0.8629 - val_loss: 112.9641 - val_acc: 0.1111
Epoch 32/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2356 - acc: 0.8952 - val_loss: 112.9692 - val_acc: 0.1111
Epoch 33/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2294 - acc: 0.8710 - val_loss: 112.8882 - val_acc: 0.1111
Epoch 34/50
4/4 [==============================] - 0s 11ms/step - loss: 0.2270 - acc: 0.8952 - val_loss: 113.0159 - val_acc: 0.1111
Epoch 35/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2562 - acc: 0.8548 - val_loss: 113.0069 - val_acc: 0.1111
Epoch 36/50
4/4 [==============================] - 0s 8ms/step - loss: 0.2601 - acc: 0.8952 - val_loss: 113.0844 - val_acc: 0.1111
Epoch 37/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2204 - acc: 0.8710 - val_loss: 112.7568 - val_acc: 0.1111
Epoch 38/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2508 - acc: 0.9032 - val_loss: 113.4738 - val_acc: 0.1111
Epoch 39/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2124 - acc: 0.8790 - val_loss: 112.6100 - val_acc: 0.1111
Epoch 40/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2670 - acc: 0.8548 - val_loss: 113.7163 - val_acc: 0.1111
Epoch 41/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2652 - acc: 0.8790 - val_loss: 112.5699 - val_acc: 0.1111
Epoch 42/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2544 - acc: 0.8871 - val_loss: 113.1320 - val_acc: 0.1111
Epoch 43/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2346 - acc: 0.8871 - val_loss: 113.1256 - val_acc: 0.1111
Epoch 44/50
4/4 [==============================] - 0s 12ms/step - loss: 0.2591 - acc: 0.9194 - val_loss: 112.8154 - val_acc: 0.1111
Epoch 45/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2482 - acc: 0.8710 - val_loss: 112.9925 - val_acc: 0.1111
Epoch 46/50
4/4 [==============================] - 0s 10ms/step - loss: 0.2384 - acc: 0.9194 - val_loss: 112.9531 - val_acc: 0.1111
Epoch 47/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2321 - acc: 0.8629 - val_loss: 112.9210 - val_acc: 0.1111
Epoch 48/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2210 - acc: 0.9194 - val_loss: 112.8476 - val_acc: 0.1111
Epoch 49/50
4/4 [==============================] - 0s 9ms/step - loss: 0.2437 - acc: 0.8548 - val_loss: 113.0876 - val_acc: 0.1111
Epoch 50/50
4/4 [==============================] - 0s 8ms/step - loss: 0.2582 - acc: 0.9032 - val_loss: 112.6023 - val_acc: 0.1111
###Markdown
Evaluation
###Code
model.evaluate(dfnpx, dfnpy)
###Output
6/6 [==============================] - 0s 2ms/step - loss: 34.3256 - acc: 0.6348
###Markdown
Service
###Code
dfnpx[25], dfnpy[25]
pred = model.predict([[1.305e+01, 2.050e+00, 3.220e+00, 2.500e+01, 1.240e+02, 2.630e+00,
2.680e+00, 4.700e-01, 1.920e+00, 3.580e+00, 1.130e+00, 3.200e+00,
8.300e+02]])
pred
np.argmax(pred)
###Output
_____no_output_____
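###Markdown
Added note (not part of the original run): `val_acc` sticks near 0.11 in the fits above because `validation_split=0.3` holds out the last 30% of rows without shuffling and the wine samples are ordered by class, so the held-out block is almost entirely a single class; the unscaled features (proline values in the hundreds) also inflate `val_loss`. Below is a minimal, hedged sketch of one common remedy; the names `X_tr`, `X_va`, `y_tr`, `y_va` are illustrative, not from the notebook.
###Code
# hedged sketch: shuffle/stratify the split and standardise the features before fitting
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_tr, X_va, y_tr, y_va = train_test_split(x_data, y_data, test_size=0.3,
                                           stratify=y_data, random_state=0)
scaler = StandardScaler().fit(X_tr)                          # fit the scaler on training rows only
X_tr, X_va = scaler.transform(X_tr), scaler.transform(X_va)  # apply the same scaling to both splits
# model.fit(X_tr, y_tr, epochs=50, validation_data=(X_va, y_va))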
###Markdown
Training stage
###Code
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(13,)))                      # input layer: 13 features
model.add(tf.keras.layers.Dense(64, activation='relu'))     # hidden layer
model.add(tf.keras.layers.Dense(36, activation='relu'))     # hidden layer (narrower than before: 36 units)
model.add(tf.keras.layers.Dense(3, activation='softmax'))   # output layer: 3 classes
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.summary()
model.fit(x_data, y_data, epochs=500, validation_split=0.3)
###Output
Epoch 1/500
4/4 [==============================] - 1s 49ms/step - loss: 88.0895 - acc: 0.5242 - val_loss: 44.9187 - val_acc: 0.1111
Epoch 2/500
4/4 [==============================] - 0s 10ms/step - loss: 27.8121 - acc: 0.5323 - val_loss: 17.4869 - val_acc: 0.0000e+00
Epoch 3/500
4/4 [==============================] - 0s 9ms/step - loss: 11.2875 - acc: 0.4758 - val_loss: 28.0420 - val_acc: 0.0000e+00
Epoch 4/500
4/4 [==============================] - 0s 8ms/step - loss: 19.7956 - acc: 0.4758 - val_loss: 29.3293 - val_acc: 0.0000e+00
Epoch 5/500
4/4 [==============================] - 0s 9ms/step - loss: 16.6995 - acc: 0.4758 - val_loss: 25.2862 - val_acc: 0.0000e+00
Epoch 6/500
4/4 [==============================] - 0s 10ms/step - loss: 6.7616 - acc: 0.5081 - val_loss: 19.3403 - val_acc: 0.0926
Epoch 7/500
4/4 [==============================] - 0s 13ms/step - loss: 3.5058 - acc: 0.6371 - val_loss: 26.2689 - val_acc: 0.1111
Epoch 8/500
4/4 [==============================] - 0s 10ms/step - loss: 2.9290 - acc: 0.6613 - val_loss: 20.4616 - val_acc: 0.0926
Epoch 9/500
4/4 [==============================] - 0s 9ms/step - loss: 2.0040 - acc: 0.7177 - val_loss: 21.5534 - val_acc: 0.0926
Epoch 10/500
4/4 [==============================] - 0s 9ms/step - loss: 1.9496 - acc: 0.7258 - val_loss: 21.1141 - val_acc: 0.0926
Epoch 11/500
4/4 [==============================] - 0s 8ms/step - loss: 0.4554 - acc: 0.8871 - val_loss: 25.0250 - val_acc: 0.1111
Epoch 12/500
4/4 [==============================] - 0s 9ms/step - loss: 0.7784 - acc: 0.7984 - val_loss: 22.3460 - val_acc: 0.1111
Epoch 13/500
4/4 [==============================] - 0s 9ms/step - loss: 0.3494 - acc: 0.9113 - val_loss: 21.3864 - val_acc: 0.0926
Epoch 14/500
4/4 [==============================] - 0s 9ms/step - loss: 0.6451 - acc: 0.8468 - val_loss: 21.8175 - val_acc: 0.1111
Epoch 15/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3166 - acc: 0.9113 - val_loss: 23.5334 - val_acc: 0.1111
Epoch 16/500
4/4 [==============================] - 0s 10ms/step - loss: 0.4590 - acc: 0.9032 - val_loss: 23.5305 - val_acc: 0.1111
Epoch 17/500
4/4 [==============================] - 0s 9ms/step - loss: 0.3305 - acc: 0.9274 - val_loss: 22.2175 - val_acc: 0.1111
Epoch 18/500
4/4 [==============================] - 0s 9ms/step - loss: 0.3197 - acc: 0.8790 - val_loss: 21.9743 - val_acc: 0.1111
Epoch 19/500
4/4 [==============================] - 0s 9ms/step - loss: 0.3193 - acc: 0.8952 - val_loss: 22.5763 - val_acc: 0.1111
Epoch 20/500
4/4 [==============================] - 0s 9ms/step - loss: 0.2884 - acc: 0.9194 - val_loss: 22.8109 - val_acc: 0.1111
Epoch 21/500
4/4 [==============================] - 0s 8ms/step - loss: 0.3256 - acc: 0.9113 - val_loss: 22.2631 - val_acc: 0.1111
Epoch 22/500
4/4 [==============================] - 0s 9ms/step - loss: 0.2850 - acc: 0.8952 - val_loss: 22.3036 - val_acc: 0.1111
Epoch 23/500
4/4 [==============================] - 0s 10ms/step - loss: 0.2718 - acc: 0.9194 - val_loss: 22.7040 - val_acc: 0.1111
Epoch 24/500
4/4 [==============================] - 0s 9ms/step - loss: 0.2841 - acc: 0.9113 - val_loss: 22.3723 - val_acc: 0.1111
Epoch 25/500
4/4 [==============================] - 0s 10ms/step - loss: 0.2682 - acc: 0.9194 - val_loss: 22.1579 - val_acc: 0.1111
Epoch 26/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2616 - acc: 0.9194 - val_loss: 22.3841 - val_acc: 0.1111
Epoch 27/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2578 - acc: 0.9194 - val_loss: 22.3705 - val_acc: 0.1111
Epoch 28/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2498 - acc: 0.9194 - val_loss: 22.0826 - val_acc: 0.1111
Epoch 29/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2496 - acc: 0.9113 - val_loss: 22.1504 - val_acc: 0.1111
Epoch 30/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2387 - acc: 0.9274 - val_loss: 22.4263 - val_acc: 0.1111
Epoch 31/500
4/4 [==============================] - 0s 19ms/step - loss: 0.2609 - acc: 0.9194 - val_loss: 22.3894 - val_acc: 0.1111
Epoch 32/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2433 - acc: 0.9032 - val_loss: 21.7623 - val_acc: 0.1111
Epoch 33/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2756 - acc: 0.8629 - val_loss: 22.2146 - val_acc: 0.1111
Epoch 34/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2810 - acc: 0.9274 - val_loss: 22.6072 - val_acc: 0.1111
Epoch 35/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2426 - acc: 0.9113 - val_loss: 21.8536 - val_acc: 0.1111
Epoch 36/500
4/4 [==============================] - 0s 19ms/step - loss: 0.2707 - acc: 0.8871 - val_loss: 22.0383 - val_acc: 0.1111
Epoch 37/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2273 - acc: 0.9355 - val_loss: 22.0788 - val_acc: 0.1111
Epoch 38/500
4/4 [==============================] - 0s 16ms/step - loss: 0.2278 - acc: 0.9274 - val_loss: 22.1174 - val_acc: 0.1111
Epoch 39/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2225 - acc: 0.9355 - val_loss: 21.9247 - val_acc: 0.1111
Epoch 40/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2328 - acc: 0.9113 - val_loss: 21.9102 - val_acc: 0.1111
Epoch 41/500
4/4 [==============================] - 0s 17ms/step - loss: 0.2565 - acc: 0.9194 - val_loss: 22.2888 - val_acc: 0.1111
Epoch 42/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2202 - acc: 0.9274 - val_loss: 21.7074 - val_acc: 0.1111
Epoch 43/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2309 - acc: 0.9113 - val_loss: 21.9712 - val_acc: 0.1111
Epoch 44/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2240 - acc: 0.9194 - val_loss: 21.9916 - val_acc: 0.1111
Epoch 45/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2141 - acc: 0.9194 - val_loss: 21.7766 - val_acc: 0.1111
Epoch 46/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2539 - acc: 0.8871 - val_loss: 21.5833 - val_acc: 0.1111
Epoch 47/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1858 - acc: 0.9194 - val_loss: 22.2768 - val_acc: 0.1111
Epoch 48/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2482 - acc: 0.9194 - val_loss: 21.9953 - val_acc: 0.1111
Epoch 49/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2246 - acc: 0.9355 - val_loss: 21.5596 - val_acc: 0.1111
Epoch 50/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2282 - acc: 0.9113 - val_loss: 21.7962 - val_acc: 0.1111
Epoch 51/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2043 - acc: 0.9355 - val_loss: 21.6658 - val_acc: 0.1111
Epoch 52/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2107 - acc: 0.9355 - val_loss: 21.7480 - val_acc: 0.1111
Epoch 53/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2152 - acc: 0.9194 - val_loss: 21.5078 - val_acc: 0.1111
Epoch 54/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1992 - acc: 0.9194 - val_loss: 21.6189 - val_acc: 0.1111
Epoch 55/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1939 - acc: 0.9355 - val_loss: 21.7133 - val_acc: 0.1111
Epoch 56/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2061 - acc: 0.9194 - val_loss: 21.6238 - val_acc: 0.1111
Epoch 57/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1962 - acc: 0.9113 - val_loss: 21.3574 - val_acc: 0.1111
Epoch 58/500
4/4 [==============================] - 0s 17ms/step - loss: 0.2105 - acc: 0.9194 - val_loss: 21.9298 - val_acc: 0.1111
Epoch 59/500
4/4 [==============================] - 0s 18ms/step - loss: 0.2122 - acc: 0.9274 - val_loss: 21.5102 - val_acc: 0.1111
Epoch 60/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1924 - acc: 0.9274 - val_loss: 21.4772 - val_acc: 0.1111
Epoch 61/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2099 - acc: 0.9194 - val_loss: 21.6222 - val_acc: 0.1111
Epoch 62/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1713 - acc: 0.9274 - val_loss: 21.2601 - val_acc: 0.1111
Epoch 63/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2026 - acc: 0.8952 - val_loss: 21.5318 - val_acc: 0.1111
Epoch 64/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1883 - acc: 0.9274 - val_loss: 21.6434 - val_acc: 0.1111
Epoch 65/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2078 - acc: 0.8952 - val_loss: 21.2435 - val_acc: 0.1111
Epoch 66/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1931 - acc: 0.9355 - val_loss: 21.8218 - val_acc: 0.1111
Epoch 67/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1805 - acc: 0.9274 - val_loss: 21.2533 - val_acc: 0.1111
Epoch 68/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2351 - acc: 0.8790 - val_loss: 21.5504 - val_acc: 0.1111
Epoch 69/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2886 - acc: 0.9032 - val_loss: 21.5441 - val_acc: 0.1111
Epoch 70/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1786 - acc: 0.8952 - val_loss: 21.1444 - val_acc: 0.1111
Epoch 71/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1697 - acc: 0.9274 - val_loss: 21.6692 - val_acc: 0.1111
Epoch 72/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2074 - acc: 0.9274 - val_loss: 21.3092 - val_acc: 0.1111
Epoch 73/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1749 - acc: 0.9274 - val_loss: 21.5006 - val_acc: 0.1111
Epoch 74/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1754 - acc: 0.9194 - val_loss: 21.2028 - val_acc: 0.1111
Epoch 75/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1780 - acc: 0.9274 - val_loss: 21.4099 - val_acc: 0.1111
Epoch 76/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1729 - acc: 0.9355 - val_loss: 21.5081 - val_acc: 0.1111
Epoch 77/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1703 - acc: 0.9355 - val_loss: 21.2298 - val_acc: 0.1111
Epoch 78/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2058 - acc: 0.9032 - val_loss: 21.6193 - val_acc: 0.1111
Epoch 79/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1737 - acc: 0.9274 - val_loss: 21.5686 - val_acc: 0.1111
Epoch 80/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1558 - acc: 0.9194 - val_loss: 21.1619 - val_acc: 0.1111
Epoch 81/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1916 - acc: 0.9194 - val_loss: 21.3106 - val_acc: 0.1111
Epoch 82/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1760 - acc: 0.9274 - val_loss: 21.3294 - val_acc: 0.1111
Epoch 83/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1937 - acc: 0.9435 - val_loss: 21.4200 - val_acc: 0.1111
Epoch 84/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1792 - acc: 0.9274 - val_loss: 21.2477 - val_acc: 0.1111
Epoch 85/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1954 - acc: 0.9355 - val_loss: 21.4327 - val_acc: 0.1111
Epoch 86/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1867 - acc: 0.9194 - val_loss: 21.3561 - val_acc: 0.1111
Epoch 87/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2196 - acc: 0.9194 - val_loss: 21.5218 - val_acc: 0.1111
Epoch 88/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2235 - acc: 0.8871 - val_loss: 21.3326 - val_acc: 0.1111
Epoch 89/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1612 - acc: 0.9274 - val_loss: 21.5469 - val_acc: 0.1111
Epoch 90/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1380 - acc: 0.9355 - val_loss: 21.1777 - val_acc: 0.1111
Epoch 91/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1856 - acc: 0.9113 - val_loss: 21.7477 - val_acc: 0.1111
Epoch 92/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2432 - acc: 0.9194 - val_loss: 21.1925 - val_acc: 0.1111
Epoch 93/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1797 - acc: 0.9274 - val_loss: 21.4856 - val_acc: 0.1111
Epoch 94/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1582 - acc: 0.9355 - val_loss: 21.4970 - val_acc: 0.1111
Epoch 95/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1725 - acc: 0.9194 - val_loss: 21.5384 - val_acc: 0.1111
Epoch 96/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1683 - acc: 0.9355 - val_loss: 21.5464 - val_acc: 0.1111
Epoch 97/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1557 - acc: 0.9274 - val_loss: 21.2838 - val_acc: 0.1111
Epoch 98/500
4/4 [==============================] - 0s 19ms/step - loss: 0.1795 - acc: 0.8952 - val_loss: 21.4964 - val_acc: 0.1111
Epoch 99/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1475 - acc: 0.9355 - val_loss: 21.4035 - val_acc: 0.1111
Epoch 100/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1673 - acc: 0.9274 - val_loss: 21.9672 - val_acc: 0.1111
Epoch 101/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2314 - acc: 0.9194 - val_loss: 21.3069 - val_acc: 0.1111
Epoch 102/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1781 - acc: 0.9113 - val_loss: 22.0357 - val_acc: 0.1111
Epoch 103/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2647 - acc: 0.9194 - val_loss: 21.2363 - val_acc: 0.1111
Epoch 104/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2162 - acc: 0.8790 - val_loss: 22.1087 - val_acc: 0.1111
Epoch 105/500
4/4 [==============================] - 0s 11ms/step - loss: 0.2112 - acc: 0.9032 - val_loss: 21.4087 - val_acc: 0.1111
Epoch 106/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1993 - acc: 0.8952 - val_loss: 22.0718 - val_acc: 0.1111
Epoch 107/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2573 - acc: 0.9113 - val_loss: 21.4072 - val_acc: 0.1111
Epoch 108/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3091 - acc: 0.8710 - val_loss: 22.2220 - val_acc: 0.1111
Epoch 109/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2545 - acc: 0.9113 - val_loss: 21.4955 - val_acc: 0.1111
Epoch 110/500
4/4 [==============================] - 0s 12ms/step - loss: 0.3231 - acc: 0.8790 - val_loss: 21.9732 - val_acc: 0.1111
Epoch 111/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3219 - acc: 0.9032 - val_loss: 21.7197 - val_acc: 0.1111
Epoch 112/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1993 - acc: 0.9113 - val_loss: 21.8521 - val_acc: 0.1111
Epoch 113/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1821 - acc: 0.9274 - val_loss: 22.3063 - val_acc: 0.1111
Epoch 114/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1785 - acc: 0.9355 - val_loss: 21.8370 - val_acc: 0.1111
Epoch 115/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1460 - acc: 0.9355 - val_loss: 22.2894 - val_acc: 0.1111
Epoch 116/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1918 - acc: 0.9032 - val_loss: 21.8155 - val_acc: 0.1111
Epoch 117/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1279 - acc: 0.9597 - val_loss: 22.5263 - val_acc: 0.1111
Epoch 118/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1772 - acc: 0.9355 - val_loss: 21.8381 - val_acc: 0.1111
Epoch 119/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1576 - acc: 0.9274 - val_loss: 22.4176 - val_acc: 0.1111
Epoch 120/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1707 - acc: 0.9355 - val_loss: 21.7391 - val_acc: 0.1111
Epoch 121/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1938 - acc: 0.9113 - val_loss: 22.9078 - val_acc: 0.1111
Epoch 122/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2612 - acc: 0.9113 - val_loss: 21.6676 - val_acc: 0.1111
Epoch 123/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2580 - acc: 0.9032 - val_loss: 22.5915 - val_acc: 0.1111
Epoch 124/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2431 - acc: 0.9274 - val_loss: 21.8023 - val_acc: 0.1111
Epoch 125/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2169 - acc: 0.9194 - val_loss: 22.4423 - val_acc: 0.1111
Epoch 126/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1774 - acc: 0.9274 - val_loss: 22.1615 - val_acc: 0.1111
Epoch 127/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1551 - acc: 0.9194 - val_loss: 22.0160 - val_acc: 0.1111
Epoch 128/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1338 - acc: 0.9435 - val_loss: 22.4843 - val_acc: 0.1111
Epoch 129/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1544 - acc: 0.9435 - val_loss: 21.9855 - val_acc: 0.1111
Epoch 130/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1621 - acc: 0.9355 - val_loss: 22.2915 - val_acc: 0.1111
Epoch 131/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1284 - acc: 0.9516 - val_loss: 21.9777 - val_acc: 0.1111
Epoch 132/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1731 - acc: 0.9355 - val_loss: 22.2912 - val_acc: 0.1111
Epoch 133/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1477 - acc: 0.9435 - val_loss: 21.9856 - val_acc: 0.1111
Epoch 134/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1864 - acc: 0.9113 - val_loss: 22.3778 - val_acc: 0.1111
Epoch 135/500
4/4 [==============================] - 0s 8ms/step - loss: 0.1695 - acc: 0.9194 - val_loss: 22.0602 - val_acc: 0.1111
Epoch 136/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1317 - acc: 0.9435 - val_loss: 22.6160 - val_acc: 0.1111
Epoch 137/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1366 - acc: 0.9435 - val_loss: 22.0511 - val_acc: 0.1111
Epoch 138/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1661 - acc: 0.9194 - val_loss: 22.6566 - val_acc: 0.1111
Epoch 139/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1641 - acc: 0.9032 - val_loss: 22.0480 - val_acc: 0.1111
Epoch 140/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1191 - acc: 0.9516 - val_loss: 22.5797 - val_acc: 0.1111
Epoch 141/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1450 - acc: 0.9435 - val_loss: 21.9700 - val_acc: 0.1111
Epoch 142/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1604 - acc: 0.9274 - val_loss: 22.5089 - val_acc: 0.1111
Epoch 143/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1668 - acc: 0.9274 - val_loss: 22.1091 - val_acc: 0.1111
Epoch 144/500
4/4 [==============================] - 0s 10ms/step - loss: 0.2306 - acc: 0.9032 - val_loss: 22.7334 - val_acc: 0.1111
Epoch 145/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1943 - acc: 0.9194 - val_loss: 22.0690 - val_acc: 0.1111
Epoch 146/500
4/4 [==============================] - 0s 10ms/step - loss: 0.2046 - acc: 0.8952 - val_loss: 22.4677 - val_acc: 0.1111
Epoch 147/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1217 - acc: 0.9516 - val_loss: 22.1848 - val_acc: 0.1111
Epoch 148/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1528 - acc: 0.9435 - val_loss: 22.2960 - val_acc: 0.1111
Epoch 149/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1503 - acc: 0.9355 - val_loss: 22.7332 - val_acc: 0.1111
Epoch 150/500
4/4 [==============================] - 0s 9ms/step - loss: 0.1553 - acc: 0.9274 - val_loss: 22.0712 - val_acc: 0.1111
Epoch 151/500
4/4 [==============================] - 0s 10ms/step - loss: 0.2565 - acc: 0.8790 - val_loss: 23.3065 - val_acc: 0.1111
Epoch 152/500
4/4 [==============================] - 0s 9ms/step - loss: 0.3083 - acc: 0.8871 - val_loss: 22.0743 - val_acc: 0.0926
Epoch 153/500
4/4 [==============================] - 0s 10ms/step - loss: 0.3026 - acc: 0.8790 - val_loss: 23.5144 - val_acc: 0.1111
Epoch 154/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2379 - acc: 0.9274 - val_loss: 22.2403 - val_acc: 0.1111
Epoch 155/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1605 - acc: 0.9274 - val_loss: 22.7736 - val_acc: 0.1111
Epoch 156/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1413 - acc: 0.9516 - val_loss: 22.6307 - val_acc: 0.1111
Epoch 157/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1474 - acc: 0.9355 - val_loss: 22.4105 - val_acc: 0.1111
Epoch 158/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1992 - acc: 0.9113 - val_loss: 22.9889 - val_acc: 0.1111
Epoch 159/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1656 - acc: 0.9435 - val_loss: 22.6298 - val_acc: 0.1111
Epoch 160/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1253 - acc: 0.9597 - val_loss: 22.4341 - val_acc: 0.1111
Epoch 161/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1335 - acc: 0.9355 - val_loss: 23.0030 - val_acc: 0.1111
Epoch 162/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1517 - acc: 0.9355 - val_loss: 22.2569 - val_acc: 0.1111
Epoch 163/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2304 - acc: 0.8952 - val_loss: 23.4000 - val_acc: 0.1111
Epoch 164/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1990 - acc: 0.9274 - val_loss: 22.2835 - val_acc: 0.1111
Epoch 165/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2109 - acc: 0.9113 - val_loss: 23.1646 - val_acc: 0.1111
Epoch 166/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1927 - acc: 0.9113 - val_loss: 22.5001 - val_acc: 0.1111
Epoch 167/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2279 - acc: 0.8952 - val_loss: 23.2451 - val_acc: 0.1111
Epoch 168/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3415 - acc: 0.8871 - val_loss: 22.2797 - val_acc: 0.0926
Epoch 169/500
4/4 [==============================] - 0s 12ms/step - loss: 0.3525 - acc: 0.8629 - val_loss: 23.4190 - val_acc: 0.1111
Epoch 170/500
4/4 [==============================] - 0s 16ms/step - loss: 0.2217 - acc: 0.9113 - val_loss: 22.4278 - val_acc: 0.1111
Epoch 171/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1649 - acc: 0.9113 - val_loss: 23.2902 - val_acc: 0.1111
Epoch 172/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2113 - acc: 0.9274 - val_loss: 22.6000 - val_acc: 0.1111
Epoch 173/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1438 - acc: 0.9516 - val_loss: 22.9299 - val_acc: 0.1111
Epoch 174/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1395 - acc: 0.9274 - val_loss: 22.4793 - val_acc: 0.1111
Epoch 175/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1496 - acc: 0.9194 - val_loss: 23.6067 - val_acc: 0.1111
Epoch 176/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1984 - acc: 0.9355 - val_loss: 22.4492 - val_acc: 0.1111
Epoch 177/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1912 - acc: 0.9032 - val_loss: 23.1109 - val_acc: 0.1111
Epoch 178/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2017 - acc: 0.9194 - val_loss: 22.4997 - val_acc: 0.1111
Epoch 179/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1545 - acc: 0.9194 - val_loss: 23.0039 - val_acc: 0.1111
Epoch 180/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1356 - acc: 0.9677 - val_loss: 22.7180 - val_acc: 0.1111
Epoch 181/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1522 - acc: 0.9435 - val_loss: 22.8532 - val_acc: 0.1111
Epoch 182/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1458 - acc: 0.9435 - val_loss: 22.5491 - val_acc: 0.1111
Epoch 183/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2183 - acc: 0.9113 - val_loss: 23.1870 - val_acc: 0.1111
Epoch 184/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1743 - acc: 0.9194 - val_loss: 22.5879 - val_acc: 0.1111
Epoch 185/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2094 - acc: 0.9113 - val_loss: 23.2739 - val_acc: 0.1111
Epoch 186/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2012 - acc: 0.9194 - val_loss: 22.5629 - val_acc: 0.1111
Epoch 187/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1318 - acc: 0.9355 - val_loss: 22.9632 - val_acc: 0.1111
Epoch 188/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1200 - acc: 0.9597 - val_loss: 22.6164 - val_acc: 0.1111
Epoch 189/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1812 - acc: 0.9274 - val_loss: 23.3753 - val_acc: 0.1111
Epoch 190/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1580 - acc: 0.9435 - val_loss: 22.4254 - val_acc: 0.1111
Epoch 191/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1680 - acc: 0.9516 - val_loss: 23.2273 - val_acc: 0.1111
Epoch 192/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1692 - acc: 0.9113 - val_loss: 22.7857 - val_acc: 0.1111
Epoch 193/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1325 - acc: 0.9435 - val_loss: 22.7859 - val_acc: 0.1111
Epoch 194/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1194 - acc: 0.9516 - val_loss: 22.6508 - val_acc: 0.1111
Epoch 195/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1279 - acc: 0.9516 - val_loss: 22.6849 - val_acc: 0.1111
Epoch 196/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1207 - acc: 0.9597 - val_loss: 22.7846 - val_acc: 0.1111
Epoch 197/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1118 - acc: 0.9597 - val_loss: 22.6760 - val_acc: 0.1111
Epoch 198/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1139 - acc: 0.9516 - val_loss: 22.8497 - val_acc: 0.1111
Epoch 199/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1493 - acc: 0.9597 - val_loss: 22.6349 - val_acc: 0.1111
Epoch 200/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2225 - acc: 0.9113 - val_loss: 23.2259 - val_acc: 0.1111
Epoch 201/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2458 - acc: 0.9194 - val_loss: 22.5024 - val_acc: 0.1111
Epoch 202/500
4/4 [==============================] - 0s 12ms/step - loss: 0.2163 - acc: 0.9032 - val_loss: 23.5023 - val_acc: 0.1111
Epoch 203/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2484 - acc: 0.8952 - val_loss: 22.5074 - val_acc: 0.1111
Epoch 204/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3535 - acc: 0.8871 - val_loss: 23.7548 - val_acc: 0.1111
Epoch 205/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1661 - acc: 0.9516 - val_loss: 22.6044 - val_acc: 0.1111
Epoch 206/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1773 - acc: 0.9355 - val_loss: 23.3055 - val_acc: 0.1111
Epoch 207/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1315 - acc: 0.9435 - val_loss: 22.8491 - val_acc: 0.1111
Epoch 208/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1289 - acc: 0.9516 - val_loss: 23.3147 - val_acc: 0.1111
Epoch 209/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1425 - acc: 0.9516 - val_loss: 22.8767 - val_acc: 0.1111
Epoch 210/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1427 - acc: 0.9435 - val_loss: 22.9283 - val_acc: 0.1111
Epoch 211/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1185 - acc: 0.9516 - val_loss: 22.8638 - val_acc: 0.1111
Epoch 212/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1022 - acc: 0.9597 - val_loss: 23.1521 - val_acc: 0.1111
Epoch 213/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1249 - acc: 0.9597 - val_loss: 22.6956 - val_acc: 0.1111
Epoch 214/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1908 - acc: 0.9194 - val_loss: 23.2083 - val_acc: 0.1111
Epoch 215/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1476 - acc: 0.9435 - val_loss: 23.1025 - val_acc: 0.1111
Epoch 216/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1359 - acc: 0.9597 - val_loss: 22.8059 - val_acc: 0.1111
Epoch 217/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1427 - acc: 0.9516 - val_loss: 23.1882 - val_acc: 0.1111
Epoch 218/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1158 - acc: 0.9516 - val_loss: 22.7635 - val_acc: 0.1111
Epoch 219/500
4/4 [==============================] - 0s 19ms/step - loss: 0.2279 - acc: 0.9274 - val_loss: 23.4922 - val_acc: 0.1111
Epoch 220/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1392 - acc: 0.9435 - val_loss: 22.8583 - val_acc: 0.1111
Epoch 221/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1048 - acc: 0.9677 - val_loss: 23.3020 - val_acc: 0.1111
Epoch 222/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1117 - acc: 0.9597 - val_loss: 22.7855 - val_acc: 0.1111
Epoch 223/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1322 - acc: 0.9435 - val_loss: 23.2608 - val_acc: 0.1111
Epoch 224/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1082 - acc: 0.9758 - val_loss: 22.8300 - val_acc: 0.1111
Epoch 225/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1507 - acc: 0.9274 - val_loss: 23.1843 - val_acc: 0.1111
Epoch 226/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1183 - acc: 0.9597 - val_loss: 23.1208 - val_acc: 0.1111
Epoch 227/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1323 - acc: 0.9516 - val_loss: 22.8034 - val_acc: 0.1111
Epoch 228/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1603 - acc: 0.9194 - val_loss: 23.7321 - val_acc: 0.1111
Epoch 229/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2189 - acc: 0.9274 - val_loss: 22.7453 - val_acc: 0.1111
Epoch 230/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1775 - acc: 0.9194 - val_loss: 24.1227 - val_acc: 0.1111
Epoch 231/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2208 - acc: 0.9113 - val_loss: 22.7973 - val_acc: 0.1111
Epoch 232/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1985 - acc: 0.9194 - val_loss: 23.6276 - val_acc: 0.1111
Epoch 233/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1823 - acc: 0.9274 - val_loss: 22.9089 - val_acc: 0.1111
Epoch 234/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1924 - acc: 0.8871 - val_loss: 23.3485 - val_acc: 0.1111
Epoch 235/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1371 - acc: 0.9597 - val_loss: 23.0399 - val_acc: 0.1111
Epoch 236/500
4/4 [==============================] - 0s 13ms/step - loss: 0.0942 - acc: 0.9677 - val_loss: 23.6487 - val_acc: 0.1111
Epoch 237/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1251 - acc: 0.9597 - val_loss: 23.0142 - val_acc: 0.1111
Epoch 238/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1210 - acc: 0.9516 - val_loss: 23.5063 - val_acc: 0.1111
Epoch 239/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1239 - acc: 0.9597 - val_loss: 23.0078 - val_acc: 0.1111
Epoch 240/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1644 - acc: 0.9435 - val_loss: 23.3756 - val_acc: 0.1111
Epoch 241/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1056 - acc: 0.9597 - val_loss: 23.1459 - val_acc: 0.1111
Epoch 242/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1118 - acc: 0.9597 - val_loss: 23.3176 - val_acc: 0.1111
Epoch 243/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1057 - acc: 0.9677 - val_loss: 23.1028 - val_acc: 0.1111
Epoch 244/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1417 - acc: 0.9435 - val_loss: 23.3592 - val_acc: 0.1111
Epoch 245/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1509 - acc: 0.9355 - val_loss: 23.4411 - val_acc: 0.1111
Epoch 246/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1591 - acc: 0.9435 - val_loss: 23.1200 - val_acc: 0.1111
Epoch 247/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1720 - acc: 0.9274 - val_loss: 23.8807 - val_acc: 0.1111
Epoch 248/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2047 - acc: 0.9194 - val_loss: 23.0054 - val_acc: 0.1111
Epoch 249/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1207 - acc: 0.9435 - val_loss: 24.2574 - val_acc: 0.1111
Epoch 250/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2305 - acc: 0.9113 - val_loss: 22.9995 - val_acc: 0.1111
Epoch 251/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1628 - acc: 0.9355 - val_loss: 23.7382 - val_acc: 0.1111
Epoch 252/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1862 - acc: 0.9194 - val_loss: 23.5683 - val_acc: 0.1111
Epoch 253/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1351 - acc: 0.9516 - val_loss: 23.3187 - val_acc: 0.1111
Epoch 254/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1037 - acc: 0.9597 - val_loss: 23.4282 - val_acc: 0.1111
Epoch 255/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1342 - acc: 0.9597 - val_loss: 23.1191 - val_acc: 0.1111
Epoch 256/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1418 - acc: 0.9355 - val_loss: 23.6649 - val_acc: 0.1111
Epoch 257/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1271 - acc: 0.9516 - val_loss: 23.2220 - val_acc: 0.1111
Epoch 258/500
4/4 [==============================] - 0s 19ms/step - loss: 0.1213 - acc: 0.9758 - val_loss: 23.4739 - val_acc: 0.1111
Epoch 259/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1337 - acc: 0.9516 - val_loss: 23.4479 - val_acc: 0.1111
Epoch 260/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1251 - acc: 0.9677 - val_loss: 23.3798 - val_acc: 0.1111
Epoch 261/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1008 - acc: 0.9597 - val_loss: 23.3336 - val_acc: 0.1111
Epoch 262/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1073 - acc: 0.9597 - val_loss: 23.5531 - val_acc: 0.1111
Epoch 263/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1057 - acc: 0.9758 - val_loss: 23.3584 - val_acc: 0.1111
Epoch 264/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1072 - acc: 0.9597 - val_loss: 23.4649 - val_acc: 0.1111
Epoch 265/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1189 - acc: 0.9597 - val_loss: 23.1721 - val_acc: 0.1111
Epoch 266/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1496 - acc: 0.9274 - val_loss: 24.0132 - val_acc: 0.1111
Epoch 267/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1573 - acc: 0.9597 - val_loss: 23.2103 - val_acc: 0.1111
Epoch 268/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1166 - acc: 0.9516 - val_loss: 23.4713 - val_acc: 0.1111
Epoch 269/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1008 - acc: 0.9758 - val_loss: 23.4554 - val_acc: 0.1111
Epoch 270/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1098 - acc: 0.9677 - val_loss: 23.3176 - val_acc: 0.1111
Epoch 271/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1338 - acc: 0.9435 - val_loss: 23.7195 - val_acc: 0.1111
Epoch 272/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1398 - acc: 0.9355 - val_loss: 23.6044 - val_acc: 0.1111
Epoch 273/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1238 - acc: 0.9516 - val_loss: 23.3157 - val_acc: 0.1111
Epoch 274/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1191 - acc: 0.9435 - val_loss: 23.6524 - val_acc: 0.1111
Epoch 275/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1013 - acc: 0.9677 - val_loss: 23.2708 - val_acc: 0.1111
Epoch 276/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1109 - acc: 0.9435 - val_loss: 23.7140 - val_acc: 0.1111
Epoch 277/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1137 - acc: 0.9677 - val_loss: 23.2920 - val_acc: 0.1111
Epoch 278/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1511 - acc: 0.9435 - val_loss: 23.8772 - val_acc: 0.1111
Epoch 279/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1322 - acc: 0.9516 - val_loss: 23.3137 - val_acc: 0.1111
Epoch 280/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1191 - acc: 0.9435 - val_loss: 23.5295 - val_acc: 0.1111
Epoch 281/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1131 - acc: 0.9597 - val_loss: 23.4659 - val_acc: 0.1111
Epoch 282/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1031 - acc: 0.9677 - val_loss: 23.5454 - val_acc: 0.1111
Epoch 283/500
4/4 [==============================] - 0s 20ms/step - loss: 0.1107 - acc: 0.9677 - val_loss: 23.5292 - val_acc: 0.1111
Epoch 284/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1156 - acc: 0.9677 - val_loss: 23.3897 - val_acc: 0.1111
Epoch 285/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1468 - acc: 0.9435 - val_loss: 23.8304 - val_acc: 0.1111
Epoch 286/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0979 - acc: 0.9516 - val_loss: 23.3852 - val_acc: 0.1111
Epoch 287/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1598 - acc: 0.9435 - val_loss: 23.5881 - val_acc: 0.1111
Epoch 288/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1364 - acc: 0.9194 - val_loss: 23.8109 - val_acc: 0.1111
Epoch 289/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1498 - acc: 0.9516 - val_loss: 23.3443 - val_acc: 0.1111
Epoch 290/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1234 - acc: 0.9355 - val_loss: 24.3042 - val_acc: 0.1111
Epoch 291/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1712 - acc: 0.9597 - val_loss: 23.3505 - val_acc: 0.1111
Epoch 292/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1456 - acc: 0.9113 - val_loss: 25.2982 - val_acc: 0.1111
Epoch 293/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3571 - acc: 0.8548 - val_loss: 23.5393 - val_acc: 0.0926
Epoch 294/500
4/4 [==============================] - 0s 14ms/step - loss: 0.3942 - acc: 0.8629 - val_loss: 26.2457 - val_acc: 0.1111
Epoch 295/500
4/4 [==============================] - 0s 16ms/step - loss: 0.4796 - acc: 0.8952 - val_loss: 23.4475 - val_acc: 0.0926
Epoch 296/500
4/4 [==============================] - 0s 15ms/step - loss: 0.4293 - acc: 0.8629 - val_loss: 24.3166 - val_acc: 0.1111
Epoch 297/500
4/4 [==============================] - 0s 17ms/step - loss: 0.4386 - acc: 0.8790 - val_loss: 23.9550 - val_acc: 0.1111
Epoch 298/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2617 - acc: 0.9194 - val_loss: 23.9968 - val_acc: 0.1111
Epoch 299/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1356 - acc: 0.9274 - val_loss: 23.6447 - val_acc: 0.1111
Epoch 300/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1256 - acc: 0.9355 - val_loss: 24.7169 - val_acc: 0.1111
Epoch 301/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1590 - acc: 0.9516 - val_loss: 23.6936 - val_acc: 0.1111
Epoch 302/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1387 - acc: 0.9435 - val_loss: 24.1227 - val_acc: 0.1111
Epoch 303/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1511 - acc: 0.9435 - val_loss: 23.8965 - val_acc: 0.1111
Epoch 304/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1406 - acc: 0.9597 - val_loss: 24.0299 - val_acc: 0.1111
Epoch 305/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1177 - acc: 0.9677 - val_loss: 23.7355 - val_acc: 0.1111
Epoch 306/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1259 - acc: 0.9516 - val_loss: 23.7459 - val_acc: 0.1111
Epoch 307/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1216 - acc: 0.9597 - val_loss: 23.7725 - val_acc: 0.1111
Epoch 308/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1292 - acc: 0.9597 - val_loss: 23.8597 - val_acc: 0.1111
Epoch 309/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1100 - acc: 0.9597 - val_loss: 23.6322 - val_acc: 0.1111
Epoch 310/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1115 - acc: 0.9597 - val_loss: 23.6761 - val_acc: 0.1111
Epoch 311/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1299 - acc: 0.9516 - val_loss: 23.4234 - val_acc: 0.1111
Epoch 312/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1386 - acc: 0.9516 - val_loss: 23.6813 - val_acc: 0.1111
Epoch 313/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1265 - acc: 0.9516 - val_loss: 23.5849 - val_acc: 0.1111
Epoch 314/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1002 - acc: 0.9677 - val_loss: 23.6324 - val_acc: 0.1111
Epoch 315/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1146 - acc: 0.9597 - val_loss: 23.5858 - val_acc: 0.1111
Epoch 316/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1062 - acc: 0.9758 - val_loss: 23.3732 - val_acc: 0.1111
Epoch 317/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1121 - acc: 0.9516 - val_loss: 23.5813 - val_acc: 0.1111
Epoch 318/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1126 - acc: 0.9758 - val_loss: 23.5084 - val_acc: 0.1111
Epoch 319/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1295 - acc: 0.9516 - val_loss: 23.8757 - val_acc: 0.1111
Epoch 320/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1829 - acc: 0.9274 - val_loss: 23.2878 - val_acc: 0.1111
Epoch 321/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1144 - acc: 0.9677 - val_loss: 23.4708 - val_acc: 0.1111
Epoch 322/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0987 - acc: 0.9677 - val_loss: 23.3341 - val_acc: 0.1111
Epoch 323/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0957 - acc: 0.9677 - val_loss: 23.5208 - val_acc: 0.1111
Epoch 324/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1140 - acc: 0.9758 - val_loss: 23.5207 - val_acc: 0.1111
Epoch 325/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1070 - acc: 0.9677 - val_loss: 23.1933 - val_acc: 0.1111
Epoch 326/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1291 - acc: 0.9516 - val_loss: 23.3896 - val_acc: 0.1111
Epoch 327/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1332 - acc: 0.9516 - val_loss: 23.5311 - val_acc: 0.1111
Epoch 328/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1431 - acc: 0.9516 - val_loss: 23.1808 - val_acc: 0.1111
Epoch 329/500
4/4 [==============================] - 0s 13ms/step - loss: 0.2193 - acc: 0.8952 - val_loss: 24.3842 - val_acc: 0.1111
Epoch 330/500
4/4 [==============================] - 0s 15ms/step - loss: 0.4301 - acc: 0.8710 - val_loss: 23.2780 - val_acc: 0.0926
Epoch 331/500
4/4 [==============================] - 0s 15ms/step - loss: 0.6971 - acc: 0.8468 - val_loss: 24.3606 - val_acc: 0.1111
Epoch 332/500
4/4 [==============================] - 0s 16ms/step - loss: 1.0037 - acc: 0.8306 - val_loss: 23.6043 - val_acc: 0.1111
Epoch 333/500
4/4 [==============================] - 0s 19ms/step - loss: 0.6777 - acc: 0.8629 - val_loss: 23.6994 - val_acc: 0.1111
Epoch 334/500
4/4 [==============================] - 0s 15ms/step - loss: 0.5580 - acc: 0.8306 - val_loss: 24.1181 - val_acc: 0.1111
Epoch 335/500
4/4 [==============================] - 0s 13ms/step - loss: 0.3854 - acc: 0.8871 - val_loss: 22.9688 - val_acc: 0.0926
Epoch 336/500
4/4 [==============================] - 0s 13ms/step - loss: 0.6197 - acc: 0.8548 - val_loss: 24.2677 - val_acc: 0.1111
Epoch 337/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2698 - acc: 0.9032 - val_loss: 23.1685 - val_acc: 0.1111
Epoch 338/500
4/4 [==============================] - 0s 18ms/step - loss: 0.2653 - acc: 0.8871 - val_loss: 23.7592 - val_acc: 0.1111
Epoch 339/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1443 - acc: 0.9677 - val_loss: 23.5637 - val_acc: 0.1111
Epoch 340/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1139 - acc: 0.9516 - val_loss: 23.2435 - val_acc: 0.1111
Epoch 341/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1421 - acc: 0.9435 - val_loss: 23.3380 - val_acc: 0.1111
Epoch 342/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1503 - acc: 0.9435 - val_loss: 23.5158 - val_acc: 0.1111
Epoch 343/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1246 - acc: 0.9516 - val_loss: 23.1765 - val_acc: 0.1111
Epoch 344/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1264 - acc: 0.9597 - val_loss: 23.0789 - val_acc: 0.1111
Epoch 345/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1226 - acc: 0.9597 - val_loss: 23.2588 - val_acc: 0.1111
Epoch 346/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1090 - acc: 0.9597 - val_loss: 23.2376 - val_acc: 0.1111
Epoch 347/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1079 - acc: 0.9677 - val_loss: 23.1331 - val_acc: 0.1111
Epoch 348/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1136 - acc: 0.9516 - val_loss: 23.2845 - val_acc: 0.1111
Epoch 349/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1186 - acc: 0.9597 - val_loss: 23.0903 - val_acc: 0.1111
Epoch 350/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1118 - acc: 0.9516 - val_loss: 23.0281 - val_acc: 0.1111
Epoch 351/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1076 - acc: 0.9758 - val_loss: 22.8920 - val_acc: 0.1111
Epoch 352/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1161 - acc: 0.9597 - val_loss: 22.9751 - val_acc: 0.1111
Epoch 353/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1095 - acc: 0.9677 - val_loss: 22.8231 - val_acc: 0.1111
Epoch 354/500
4/4 [==============================] - 0s 19ms/step - loss: 0.1871 - acc: 0.9032 - val_loss: 23.4554 - val_acc: 0.1111
Epoch 355/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1667 - acc: 0.9435 - val_loss: 22.7717 - val_acc: 0.1111
Epoch 356/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1068 - acc: 0.9435 - val_loss: 22.8119 - val_acc: 0.1111
Epoch 357/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0941 - acc: 0.9758 - val_loss: 23.0294 - val_acc: 0.1111
Epoch 358/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1219 - acc: 0.9597 - val_loss: 23.0359 - val_acc: 0.1111
Epoch 359/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1362 - acc: 0.9597 - val_loss: 22.6603 - val_acc: 0.1111
Epoch 360/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1787 - acc: 0.9274 - val_loss: 23.2020 - val_acc: 0.1111
Epoch 361/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1759 - acc: 0.9435 - val_loss: 22.5643 - val_acc: 0.1111
Epoch 362/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2591 - acc: 0.8952 - val_loss: 23.3079 - val_acc: 0.1111
Epoch 363/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1773 - acc: 0.9516 - val_loss: 22.6940 - val_acc: 0.1111
Epoch 364/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1323 - acc: 0.9274 - val_loss: 23.3494 - val_acc: 0.1111
Epoch 365/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1270 - acc: 0.9597 - val_loss: 22.8959 - val_acc: 0.1111
Epoch 366/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1060 - acc: 0.9597 - val_loss: 22.8525 - val_acc: 0.1111
Epoch 367/500
4/4 [==============================] - 0s 17ms/step - loss: 0.0935 - acc: 0.9677 - val_loss: 23.0385 - val_acc: 0.1111
Epoch 368/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1047 - acc: 0.9758 - val_loss: 22.7066 - val_acc: 0.1111
Epoch 369/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1565 - acc: 0.9194 - val_loss: 23.3360 - val_acc: 0.1111
Epoch 370/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2052 - acc: 0.9516 - val_loss: 22.6202 - val_acc: 0.1111
Epoch 371/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1175 - acc: 0.9516 - val_loss: 22.8594 - val_acc: 0.1111
Epoch 372/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0986 - acc: 0.9516 - val_loss: 23.0136 - val_acc: 0.1111
Epoch 373/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1045 - acc: 0.9597 - val_loss: 22.7756 - val_acc: 0.1111
Epoch 374/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1045 - acc: 0.9758 - val_loss: 22.8384 - val_acc: 0.1111
Epoch 375/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1329 - acc: 0.9355 - val_loss: 22.9049 - val_acc: 0.1111
Epoch 376/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1122 - acc: 0.9758 - val_loss: 22.9546 - val_acc: 0.1111
Epoch 377/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1053 - acc: 0.9597 - val_loss: 22.5753 - val_acc: 0.1111
Epoch 378/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1304 - acc: 0.9435 - val_loss: 23.1421 - val_acc: 0.1111
Epoch 379/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1603 - acc: 0.9435 - val_loss: 22.8544 - val_acc: 0.1111
Epoch 380/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1365 - acc: 0.9597 - val_loss: 22.6235 - val_acc: 0.1111
Epoch 381/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1077 - acc: 0.9435 - val_loss: 23.1235 - val_acc: 0.1111
Epoch 382/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1196 - acc: 0.9758 - val_loss: 22.6663 - val_acc: 0.1111
Epoch 383/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1430 - acc: 0.9274 - val_loss: 23.0961 - val_acc: 0.1111
Epoch 384/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1269 - acc: 0.9435 - val_loss: 22.8348 - val_acc: 0.1111
Epoch 385/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1013 - acc: 0.9597 - val_loss: 22.7799 - val_acc: 0.1111
Epoch 386/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0961 - acc: 0.9597 - val_loss: 22.8906 - val_acc: 0.1111
Epoch 387/500
4/4 [==============================] - 0s 20ms/step - loss: 0.1220 - acc: 0.9758 - val_loss: 22.6870 - val_acc: 0.1111
Epoch 388/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1026 - acc: 0.9355 - val_loss: 23.1471 - val_acc: 0.1111
Epoch 389/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1512 - acc: 0.9597 - val_loss: 22.6410 - val_acc: 0.1111
Epoch 390/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1469 - acc: 0.9355 - val_loss: 23.3292 - val_acc: 0.1111
Epoch 391/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0839 - acc: 0.9677 - val_loss: 22.5535 - val_acc: 0.1111
Epoch 392/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1123 - acc: 0.9677 - val_loss: 23.2854 - val_acc: 0.1111
Epoch 393/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1066 - acc: 0.9677 - val_loss: 22.6559 - val_acc: 0.1111
Epoch 394/500
4/4 [==============================] - 0s 16ms/step - loss: 0.0970 - acc: 0.9597 - val_loss: 22.9524 - val_acc: 0.1111
Epoch 395/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1034 - acc: 0.9758 - val_loss: 22.6434 - val_acc: 0.1111
Epoch 396/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1321 - acc: 0.9355 - val_loss: 23.1117 - val_acc: 0.1111
Epoch 397/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1108 - acc: 0.9677 - val_loss: 22.6222 - val_acc: 0.1111
Epoch 398/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1221 - acc: 0.9355 - val_loss: 23.2697 - val_acc: 0.1111
Epoch 399/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1336 - acc: 0.9597 - val_loss: 22.5971 - val_acc: 0.1111
Epoch 400/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1473 - acc: 0.9355 - val_loss: 23.5644 - val_acc: 0.1111
Epoch 401/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1166 - acc: 0.9677 - val_loss: 22.7911 - val_acc: 0.1111
Epoch 402/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1242 - acc: 0.9435 - val_loss: 23.4126 - val_acc: 0.1111
Epoch 403/500
4/4 [==============================] - 0s 20ms/step - loss: 0.1157 - acc: 0.9435 - val_loss: 22.7714 - val_acc: 0.1111
Epoch 404/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1025 - acc: 0.9758 - val_loss: 23.2145 - val_acc: 0.1111
Epoch 405/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0968 - acc: 0.9758 - val_loss: 22.8985 - val_acc: 0.1111
Epoch 406/500
4/4 [==============================] - 0s 13ms/step - loss: 0.0865 - acc: 0.9758 - val_loss: 23.0441 - val_acc: 0.1111
Epoch 407/500
4/4 [==============================] - 0s 13ms/step - loss: 0.0923 - acc: 0.9839 - val_loss: 23.0963 - val_acc: 0.1111
Epoch 408/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0880 - acc: 0.9758 - val_loss: 22.9003 - val_acc: 0.1111
Epoch 409/500
4/4 [==============================] - 0s 16ms/step - loss: 0.0935 - acc: 0.9597 - val_loss: 23.1993 - val_acc: 0.1111
Epoch 410/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1080 - acc: 0.9435 - val_loss: 23.1260 - val_acc: 0.1111
Epoch 411/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1130 - acc: 0.9677 - val_loss: 22.7842 - val_acc: 0.1111
Epoch 412/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1128 - acc: 0.9597 - val_loss: 23.4000 - val_acc: 0.1111
Epoch 413/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1088 - acc: 0.9677 - val_loss: 22.7554 - val_acc: 0.1111
Epoch 414/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1766 - acc: 0.9194 - val_loss: 23.4285 - val_acc: 0.1111
Epoch 415/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1102 - acc: 0.9435 - val_loss: 23.1443 - val_acc: 0.1111
Epoch 416/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1154 - acc: 0.9677 - val_loss: 23.3544 - val_acc: 0.1111
Epoch 417/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0883 - acc: 0.9597 - val_loss: 23.1959 - val_acc: 0.1111
Epoch 418/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0893 - acc: 0.9677 - val_loss: 23.6052 - val_acc: 0.1111
Epoch 419/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0845 - acc: 0.9839 - val_loss: 23.1814 - val_acc: 0.1111
Epoch 420/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0926 - acc: 0.9677 - val_loss: 23.5272 - val_acc: 0.1111
Epoch 421/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1336 - acc: 0.9597 - val_loss: 22.8701 - val_acc: 0.1111
Epoch 422/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1030 - acc: 0.9435 - val_loss: 23.4018 - val_acc: 0.1111
Epoch 423/500
4/4 [==============================] - 0s 19ms/step - loss: 0.1165 - acc: 0.9677 - val_loss: 22.8822 - val_acc: 0.1111
Epoch 424/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0996 - acc: 0.9355 - val_loss: 23.6344 - val_acc: 0.1111
Epoch 425/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0990 - acc: 0.9758 - val_loss: 22.9401 - val_acc: 0.1111
Epoch 426/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1426 - acc: 0.9355 - val_loss: 23.9520 - val_acc: 0.1111
Epoch 427/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1292 - acc: 0.9435 - val_loss: 23.1695 - val_acc: 0.1111
Epoch 428/500
4/4 [==============================] - 0s 20ms/step - loss: 0.1820 - acc: 0.9355 - val_loss: 23.3348 - val_acc: 0.1111
Epoch 429/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1555 - acc: 0.9194 - val_loss: 23.3582 - val_acc: 0.1111
Epoch 430/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1807 - acc: 0.9355 - val_loss: 22.8922 - val_acc: 0.1111
Epoch 431/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1794 - acc: 0.9194 - val_loss: 23.9570 - val_acc: 0.1111
Epoch 432/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1294 - acc: 0.9597 - val_loss: 22.9627 - val_acc: 0.1111
Epoch 433/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1563 - acc: 0.9194 - val_loss: 24.6522 - val_acc: 0.1111
Epoch 434/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1654 - acc: 0.9274 - val_loss: 22.8452 - val_acc: 0.0926
Epoch 435/500
4/4 [==============================] - 0s 15ms/step - loss: 0.2285 - acc: 0.9032 - val_loss: 24.8117 - val_acc: 0.1111
Epoch 436/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2553 - acc: 0.9194 - val_loss: 22.9794 - val_acc: 0.1111
Epoch 437/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1673 - acc: 0.9516 - val_loss: 23.8507 - val_acc: 0.1111
Epoch 438/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1522 - acc: 0.9435 - val_loss: 23.3858 - val_acc: 0.1111
Epoch 439/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1162 - acc: 0.9677 - val_loss: 23.7112 - val_acc: 0.1111
Epoch 440/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0613 - acc: 0.9839 - val_loss: 22.8955 - val_acc: 0.1111
Epoch 441/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1986 - acc: 0.9194 - val_loss: 24.0132 - val_acc: 0.1111
Epoch 442/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1002 - acc: 0.9677 - val_loss: 23.0541 - val_acc: 0.1111
Epoch 443/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1226 - acc: 0.9516 - val_loss: 24.0349 - val_acc: 0.1111
Epoch 444/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1380 - acc: 0.9516 - val_loss: 23.1683 - val_acc: 0.1111
Epoch 445/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1296 - acc: 0.9677 - val_loss: 23.4324 - val_acc: 0.1111
Epoch 446/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1145 - acc: 0.9355 - val_loss: 23.3625 - val_acc: 0.1111
Epoch 447/500
4/4 [==============================] - 0s 20ms/step - loss: 0.0952 - acc: 0.9758 - val_loss: 23.1775 - val_acc: 0.1111
Epoch 448/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1233 - acc: 0.9355 - val_loss: 23.6297 - val_acc: 0.1111
Epoch 449/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0994 - acc: 0.9839 - val_loss: 22.9401 - val_acc: 0.1111
Epoch 450/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1284 - acc: 0.9435 - val_loss: 23.6238 - val_acc: 0.1111
Epoch 451/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1732 - acc: 0.9355 - val_loss: 23.3320 - val_acc: 0.1111
Epoch 452/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1135 - acc: 0.9597 - val_loss: 23.2497 - val_acc: 0.1111
Epoch 453/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1197 - acc: 0.9516 - val_loss: 23.5210 - val_acc: 0.1111
Epoch 454/500
4/4 [==============================] - 0s 15ms/step - loss: 0.0908 - acc: 0.9758 - val_loss: 23.0503 - val_acc: 0.1111
Epoch 455/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1057 - acc: 0.9355 - val_loss: 23.8890 - val_acc: 0.1111
Epoch 456/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1478 - acc: 0.9274 - val_loss: 22.7271 - val_acc: 0.1111
Epoch 457/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1430 - acc: 0.9355 - val_loss: 23.8116 - val_acc: 0.1111
Epoch 458/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1397 - acc: 0.9597 - val_loss: 22.7251 - val_acc: 0.1111
Epoch 459/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1309 - acc: 0.9516 - val_loss: 23.5237 - val_acc: 0.1111
Epoch 460/500
4/4 [==============================] - 0s 14ms/step - loss: 0.0873 - acc: 0.9677 - val_loss: 22.7880 - val_acc: 0.1111
Epoch 461/500
4/4 [==============================] - 0s 17ms/step - loss: 0.1045 - acc: 0.9516 - val_loss: 24.2939 - val_acc: 0.1111
Epoch 462/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1805 - acc: 0.9274 - val_loss: 22.8714 - val_acc: 0.1111
Epoch 463/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1306 - acc: 0.9435 - val_loss: 23.8546 - val_acc: 0.1111
Epoch 464/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1184 - acc: 0.9677 - val_loss: 22.9326 - val_acc: 0.1111
Epoch 465/500
4/4 [==============================] - 0s 18ms/step - loss: 0.1473 - acc: 0.9516 - val_loss: 23.2812 - val_acc: 0.1111
Epoch 466/500
4/4 [==============================] - 0s 18ms/step - loss: 0.0904 - acc: 0.9516 - val_loss: 22.7908 - val_acc: 0.1111
Epoch 467/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2752 - acc: 0.8871 - val_loss: 22.7266 - val_acc: 0.1111
Epoch 468/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1516 - acc: 0.9274 - val_loss: 23.0679 - val_acc: 0.1111
Epoch 469/500
4/4 [==============================] - 0s 14ms/step - loss: 0.2015 - acc: 0.9435 - val_loss: 22.7514 - val_acc: 0.1111
Epoch 470/500
4/4 [==============================] - 0s 17ms/step - loss: 0.2574 - acc: 0.8871 - val_loss: 23.9754 - val_acc: 0.1111
Epoch 471/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1217 - acc: 0.9677 - val_loss: 22.9429 - val_acc: 0.1111
Epoch 472/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1507 - acc: 0.9355 - val_loss: 24.5729 - val_acc: 0.1111
Epoch 473/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1513 - acc: 0.9355 - val_loss: 22.8511 - val_acc: 0.1111
Epoch 474/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1190 - acc: 0.9435 - val_loss: 24.1421 - val_acc: 0.1111
Epoch 475/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1284 - acc: 0.9677 - val_loss: 22.7238 - val_acc: 0.1111
Epoch 476/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1671 - acc: 0.9274 - val_loss: 24.0004 - val_acc: 0.1111
Epoch 477/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1287 - acc: 0.9597 - val_loss: 23.0066 - val_acc: 0.1111
Epoch 478/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1525 - acc: 0.9516 - val_loss: 23.4313 - val_acc: 0.1111
Epoch 479/500
4/4 [==============================] - 0s 16ms/step - loss: 0.1107 - acc: 0.9516 - val_loss: 22.9600 - val_acc: 0.1111
Epoch 480/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1025 - acc: 0.9677 - val_loss: 23.4529 - val_acc: 0.1111
Epoch 481/500
4/4 [==============================] - 0s 15ms/step - loss: 0.1056 - acc: 0.9597 - val_loss: 23.0562 - val_acc: 0.1111
Epoch 482/500
4/4 [==============================] - 0s 14ms/step - loss: 0.1335 - acc: 0.9435 - val_loss: 23.1963 - val_acc: 0.1111
Epoch 483/500
4/4 [==============================] - 0s 13ms/step - loss: 0.0949 - acc: 0.9516 - val_loss: 23.0667 - val_acc: 0.1111
Epoch 484/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1045 - acc: 0.9677 - val_loss: 23.1125 - val_acc: 0.1111
Epoch 485/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1124 - acc: 0.9516 - val_loss: 23.3626 - val_acc: 0.1111
Epoch 486/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1123 - acc: 0.9677 - val_loss: 23.1211 - val_acc: 0.1111
Epoch 487/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1261 - acc: 0.9758 - val_loss: 22.9302 - val_acc: 0.1111
Epoch 488/500
4/4 [==============================] - 0s 12ms/step - loss: 0.0967 - acc: 0.9516 - val_loss: 23.2112 - val_acc: 0.1111
Epoch 489/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1019 - acc: 0.9516 - val_loss: 23.1375 - val_acc: 0.1111
Epoch 490/500
4/4 [==============================] - 0s 11ms/step - loss: 0.1025 - acc: 0.9758 - val_loss: 22.8641 - val_acc: 0.1111
Epoch 491/500
4/4 [==============================] - 0s 13ms/step - loss: 0.1538 - acc: 0.9516 - val_loss: 23.2745 - val_acc: 0.1111
Epoch 492/500
4/4 [==============================] - 0s 12ms/step - loss: 0.0703 - acc: 0.9839 - val_loss: 22.7327 - val_acc: 0.1111
Epoch 493/500
4/4 [==============================] - 0s 12ms/step - loss: 0.1116 - acc: 0.9516 - val_loss: 23.2708 - val_acc: 0.1111
Epoch 494/500
4/4 [==============================] - 0s 12ms/step - loss: 0.0927 - acc: 0.9758 - val_loss: 22.8425 - val_acc: 0.1111
Epoch 495/500
4/4 [==============================] - 0s 11ms/step - loss: 0.0986 - acc: 0.9597 - val_loss: 23.1908 - val_acc: 0.1111
Epoch 496/500
4/4 [==============================] - 0s 10ms/step - loss: 0.1041 - acc: 0.9435 - val_loss: 23.2221 - val_acc: 0.1111
Epoch 497/500
4/4 [==============================] - 0s 11ms/step - loss: 0.0922 - acc: 0.9839 - val_loss: 23.1020 - val_acc: 0.1111
Epoch 498/500
4/4 [==============================] - 0s 13ms/step - loss: 0.0914 - acc: 0.9758 - val_loss: 23.0827 - val_acc: 0.1111
Epoch 499/500
4/4 [==============================] - 0s 19ms/step - loss: 0.0845 - acc: 0.9758 - val_loss: 22.8367 - val_acc: 0.1111
Epoch 500/500
4/4 [==============================] - 0s 12ms/step - loss: 0.0900 - acc: 0.9677 - val_loss: 22.9039 - val_acc: 0.1111
###Markdown
Evaluation
###Code
model.evaluate(x_data, y_data)
###Output
6/6 [==============================] - 0s 2ms/step - loss: 7.0011 - acc: 0.7191
###Markdown
* dense : 64, dense1 : 24, epochs : 50 --> loss: 8.3743 - acc: 0.6629 [8.374309539794922, 0.6629213690757751]
* dense : 64, dense1 : 24, epochs : 100 --> loss: 7.7834 - acc: 0.6910 [7.783389091491699, 0.6910112500190735] val_loss: 25.3287 - val_acc: 0.1111
* dense : 48, dense1 : 24, epochs : 500 --> loss: 22.7629 - acc: 0.7079 [22.76287841796875, 0.7078651785850525] acc: 0.9677 - val_loss: 74.8429 - val_acc: 0.1111
* dense : 64, dense1 : 48, epochs : 500 --> loss: 11.5153 - acc: 0.7191 [11.515317916870117, 0.7191011309623718] acc: 0.9516 - val_loss: 37.7901 - val_acc: 0.1111
* dense : 64, dense1 : 48, epochs : 1000 --> loss: 13.2306 - acc: 0.7135 [13.230615615844727, 0.7134831547737122] acc: 0.9839 - val_loss: 43.4997 - val_acc: 0.1111
* dense : 64, dense1 : 36, epochs : 500 --> loss: 7.0011 - acc: 0.7191 [7.001112461090088, 0.7191011309623718] acc: 0.9677 - val_loss: 22.9039 - val_acc: 0.1111
* dense : 64, dense1 : 24, epochs : 500 --> loss: 6.9521 - acc: 0.7191 [6.952107906341553, 0.7191011309623718] val_loss: 22.7861 - val_acc: 0.1111
* dense : 64, dense1 : 24, dense2 : 64 epochs : 100 --> loss: 2.0083 - acc: 0.7022 [2.008345365524292, 0.7022472023963928] acc: 0.9839 - val_loss: 6.3024 - val_acc: 0.
* dense : 64, dense1 : 24, dense2 : 64 dense3 : 24 epochs : 100 --> loss: 1.6846 - acc: 0.5506 [1.6846214532852173, 0.550561785697937] acc: 0.7500 - val_loss: 4.4220 - val_acc: 0.0926
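For reference, the cell below is a minimal sketch of the dense-64 / dense-36 variant listed above; the layer sizes follow those notes, but the activations, loss and optimizer are assumptions for illustration and may differ from the model actually defined earlier in this notebook.
###Code
# Minimal sketch only -- layer sizes follow the "dense : 64, dense1 : 36" entry above;
# activations, loss and optimizer are assumptions, not the notebook's original choices.
from tensorflow import keras
sketch_model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(13,)),  # 13 wine features
    keras.layers.Dense(36, activation='relu'),
    keras.layers.Dense(3, activation='softmax')                    # 3 wine cultivars
])
sketch_model.compile(optimizer='adam',
                     loss='sparse_categorical_crossentropy',  # assumes integer class labels
                     metrics=['acc'])
sketch_model.summary()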
###Code
x_data[38]
y_data[38]
pred = model.predict([[1.307e+01, 1.500e+00, 2.100e+00, 1.550e+01, 9.800e+01, 2.400e+00,
2.640e+00, 2.800e-01, 1.370e+00, 3.700e+00, 1.180e+00, 2.690e+00,
1.020e+03]])
pred
np.argmax(pred)
###Output
_____no_output_____
###Markdown
###Code
from sklearn import datasets
import matplotlib.pyplot as plt
import math
wine = datasets.load_wine()
#print(wine.DESCR)
print(len(wine.data))
print(len(wine.data[0]))
# Select all rows and only the third and fourth features: ash and alcalinity of ash
X = wine.data[:,2:4]
# Target will be used to plot samples in different colors for the three wine classes
Y = wine.target
plt.scatter(X[:,0], X[:,1], c=Y)
plt.xlabel('Ash')
plt.ylabel('Alcalinity of ash')
plt.title('Wine feature distribution (ash vs. alcalinity of ash)')
def sigmoid(z):
return 1.0/(1 + math.e ** (-z))
def predict(sample):
result = 0.0
for i in range(len(sample)):
result = result + weights[i] * sample[i]
result = result + bias
return sigmoid(result)
def loss(y_train, y_predicted):
return -(y_train * math.log(y_predicted) + (1.0 - y_train) * math.log(1 - y_predicted))
num_features = wine.data.shape[1]
def train_one_epoch(x_train_samples, y_train_samples):
cost = 0.0
dw = [0.0] * num_features
db = 0.0
global bias, weights
m = len(x_train_samples)
for i in range(m):
x_sample = x_train_samples[i]
y_sample = y_train_samples[i]
predicted = predict(x_sample)
cost = cost + loss(y_sample, predicted)
# dz is the derivative of the loss function
dz = predicted - y_sample
for j in range(len(weights)):
dw[j] = dw[j] + x_sample[j] * dz
db = db + dz
cost = cost / m
db = db / m
bias = bias - learning_rate*db
for j in range(len(weights)):
dw[j] = dw[j] / m
weights[j] = weights[j] - learning_rate*dw[j]
return cost
# Model will "learn" values for the weights and biases
weights = [0.0] * num_features
bias = 0.0
learning_rate = 0.4
epochs = 5000
x_train_samples = wine.data / wine.data.max()
y_train_samples = [1 if y == 2 else 0 for y in wine.target]
loss_array = []
for epoch in range(epochs):
loss_value = train_one_epoch(x_train_samples, y_train_samples)
loss_array.append(loss_value)
plt.plot(range(epochs), loss_array)
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.title('Loss vs. Epoch')
plt.show()
predictions = []
m = len(x_train_samples)
correct = 0
for i in range(m):
sample = x_train_samples[i]
value = predict(sample)
predictions.append(value)
if value >= 0.5:
value = 1
else:
value = 0
if value == y_train_samples[i]:
correct = correct + 1.0
plt.plot(range(m), predictions, label='Predicted')
plt.plot(range(m), y_train_samples, label='Ground truth')
plt.ylabel('Prediction')
plt.xlabel('Sample')
plt.legend(loc='best')
plt.show()
print('Accuracy: %.2f %%' % (100 * correct/m))
###Output
_____no_output_____ |
doc/source/tutorials/pyam_logo.ipynb | ###Markdown
Make our Logo!The logo combines a number of fun **pyam** features, including- line plots- filling data between lines- adding ranges of final-year data
###Code
import itertools
import pyam
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-deep')
def func(x, factor):
return np.sin(x) + factor * x
x = np.linspace(0, 4, 100)
combinations = itertools.product(['m1', 'm2', 'm3', 'm4'], ['s1', 's2', 's3'])
data = [[m, s] + ['r', 'v', 'u'] + list(func(x, 0.5 + 0.1 * i)) for i, (m, s) in enumerate(combinations)]
df = pyam.IamDataFrame(pd.DataFrame(data, columns=pyam.IAMC_IDX + list(range(len(x)))))
df.head()
fig, ax = plt.subplots()
df.filter(scenario='s2').plot(ax=ax, color='model', legend=False, title=False)
df.filter(scenario='s2', keep=False).plot(ax=ax, linewidth=0.5, color='model',
legend=False, title=False)
df.plot(ax=ax, alpha=0, color='model', fill_between=True, final_ranges=dict(linewidth=4),
legend=False, title=False)
plt.axis('off')
plt.tight_layout()
fig.savefig('logo.pdf', bbox_inches='tight', transparent=True, pad_inches=0)
###Output
_____no_output_____
###Markdown
Make our Logo!The logo combines a number of fun **pyam** features, including- line plots- filling data between lines- adding ranges of final-year data
###Code
import itertools
import pyam
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-deep')
def func(x, factor):
return np.sin(x) + factor * x
x = np.linspace(0, 4, 100)
combinations = itertools.product(['m1', 'm2', 'm3', 'm4'], ['s1', 's2', 's3'])
data = [[m, s] + ['r', 'v', 'u'] + list(func(x, 0.5 + 0.1 * i)) for i, (m, s) in enumerate(combinations)]
df = pyam.IamDataFrame(pd.DataFrame(data, columns=pyam.IAMC_IDX + list(range(len(x)))))
df.head()
fig, ax = plt.subplots()
df.filter(scenario='s2').line_plot(ax=ax, color='model', legend=False, title=False)
df.filter(scenario='s2', keep=False).line_plot(ax=ax, linewidth=0.5, color='model', legend=False, title=False)
df.line_plot(ax=ax, alpha=0, color='model', fill_between=True, final_ranges=dict(linewidth=4), legend=False, title=False)
plt.axis('off')
plt.tight_layout()
fig.savefig('logo.pdf', bbox_inches='tight', transparent=True, pad_inches=0)
###Output
_____no_output_____ |
Jupyter/KitchenSinkCSharpQuantBookTemplate.ipynb | ###Markdown
 Welcome to The QuantConnect Research Page Refer to this page for documentation https://www.quantconnect.com/docsIntroduction-to-Jupyter Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicCSharpQuantBookTemplate.ipynb QuantBook Basics Start QuantBook- Load "QuantConnect.csx" with all the basic imports- Create a QuantBook instance
###Code
#load "QuantConnect.csx"
using QuantConnect.Data.Custom;
using QuantConnect.Data.Market;
var qb = new QuantBook();
###Output
_____no_output_____
###Markdown
Selecting Asset DataCheckout the QuantConnect [docs](https://www.quantconnect.com/docsInitializing-Algorithms-Selecting-Asset-Data) to learn how to select asset data.
###Code
var spy = qb.AddEquity("SPY");
var eur = qb.AddForex("EURUSD");
var btc = qb.AddCrypto("BTCUSD");
var fxv = qb.AddData<FxcmVolume>("EURUSD_Vol", Resolution.Hour);
###Output
_____no_output_____
###Markdown
Historical Data RequestsWe can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol.For more information, please follow the [link](https://www.quantconnect.com/docsHistorical-Data-Historical-Data-Requests).
###Code
// Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
var h1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily);
// Gets historical data from the subscribed assets, from the last 360 days with daily resolution
var h2 = qb.History(qb.Securities.Keys, TimeSpan.FromDays(360), Resolution.Daily);
// Gets historical data from the subscribed assets, between two dates with daily resolution
var h3 = qb.History(btc.Symbol, new DateTime(2014,1,1), DateTime.Now, Resolution.Daily);
// Only fetches historical data from a desired symbol
var h4 = qb.History(spy.Symbol, 360, Resolution.Daily);
// Only fetches historical data from a desired symbol
var h5 = qb.History<QuoteBar>(eur.Symbol, TimeSpan.FromDays(360), Resolution.Daily);
// Fetches custom data
var h6 = qb.History<FxcmVolume>(fxv.Symbol, TimeSpan.FromDays(360));
###Output
_____no_output_____ |
18_tfagents/6_reinforce_tutorial.ipynb | ###Markdown
Copyright 2021 The TF-Agents Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
REINFORCE agent View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Introduction This example shows how to train a [REINFORCE](https://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf) agent on the Cartpole environment using the TF-Agents library, similar to the [DQN tutorial](1_dqn_tutorial.ipynb).We will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection. Setup If you haven't installed the following dependencies, run:
###Code
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'imageio==2.4.0'
!pip install -q pyvirtualdisplay
!pip install -q tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.reinforce import reinforce_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
env_name = "CartPole-v0" # @param {type:"string"}
num_iterations = 250 # @param {type:"integer"}
collect_episodes_per_iteration = 2 # @param {type:"integer"}
replay_buffer_capacity = 2000 # @param {type:"integer"}
fc_layer_params = (100,)
learning_rate = 1e-3 # @param {type:"number"}
log_interval = 25 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 50 # @param {type:"integer"}
###Output
_____no_output_____
###Markdown
EnvironmentEnvironments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using `suites`. We have different `suites` for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.Now let us load the CartPole environment from the OpenAI Gym suite.
###Code
env = suite_gym.load(env_name)
###Output
_____no_output_____
###Markdown
We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.
###Code
#@test {"skip": true}
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
The `time_step = environment.step(action)` statement takes `action` in the environment. The `TimeStep` tuple returned contains the environment's next observation and reward for that action. The `time_step_spec()` and `action_spec()` methods in the environment return the specifications (types, shapes, bounds) of the `time_step` and `action` respectively.
###Code
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
###Output
Observation Spec:
BoundedArraySpec(shape=(4,), dtype=dtype('float32'), name='observation', minimum=[-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], maximum=[4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38])
Action Spec:
BoundedArraySpec(shape=(), dtype=dtype('int64'), name='action', minimum=0, maximum=1)
###Markdown
So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the `action_spec` is a scalar where 0 means "move left" and 1 means "move right."
###Code
time_step = env.reset()
print('Time step:')
print(time_step)
action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step)
###Output
Time step:
TimeStep(step_type=array(0, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.03037029, -0.02667559, 0.00637377, -0.03489717], dtype=float32))
Next time step:
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.02983678, 0.16835438, 0.00567583, -0.3255623 ], dtype=float32))
###Markdown
Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses numpy arrays, the `TFPyEnvironment` converts these to/from `Tensors` for you to more easily interact with TensorFlow policies and agents.
###Code
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentThe algorithm that we use to solve an RL problem is represented as an `Agent`. In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of `Agents` such as [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), [DDPG](https://arxiv.org/pdf/1509.02971.pdf), [TD3](https://arxiv.org/pdf/1802.09477.pdf), [PPO](https://arxiv.org/abs/1707.06347) and [SAC](https://arxiv.org/abs/1801.01290).To create a REINFORCE Agent, we first need an `Actor Network` that can learn to predict the action given an observation from the environment.We can easily create an `Actor Network` using the specs of the observations and actions. We can specify the layers in the network which, in this example, is the `fc_layer_params` argument set to a tuple of `ints` representing the sizes of each hidden layer (see the Hyperparameters section above).
###Code
actor_net = actor_distribution_network.ActorDistributionNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
###Code
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
tf_agent = reinforce_agent.ReinforceAgent(
train_env.time_step_spec(),
train_env.action_spec(),
actor_network=actor_net,
optimizer=optimizer,
normalize_returns=True,
train_step_counter=train_step_counter)
tf_agent.initialize()
###Output
_____no_output_____
###Markdown
PoliciesIn TF-Agents, policies represent the standard notion of policies in RL: given a `time_step` produce an action or a distribution over actions. The main method is `policy_step = policy.action(time_step)` where `policy_step` is a named tuple `PolicyStep(action, state, info)`. The `policy_step.action` is the `action` to be applied to the environment, `state` represents the state for stateful (RNN) policies and `info` may contain auxiliary information such as log probabilities of the actions.Agents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy).
###Code
eval_policy = tf_agent.policy
collect_policy = tf_agent.collect_policy
###Output
_____no_output_____
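###Markdown
As a quick illustration (not part of the original tutorial), the evaluation policy can be queried directly; `action()` returns the `PolicyStep` named tuple described above. The cell simply reuses the `eval_env` and `eval_policy` objects defined earlier.
###Code
# Illustrative only: query the greedy evaluation policy for a single time step.
example_time_step = eval_env.reset()
example_policy_step = eval_policy.action(example_time_step)
print('action:', example_policy_step.action)
print('state: ', example_policy_step.state)   # empty tuple for this non-RNN actor
print('info:  ', example_policy_step.info)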
###Markdown
Metrics and EvaluationThe most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
###Code
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# Please also see the metrics module for standard implementations of different
# metrics.
###Output
_____no_output_____
###Markdown
Replay BufferIn order to keep track of the data collected from the environment, we will use the TFUniformReplayBuffer. This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using `tf_agent.collect_data_spec`.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=tf_agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
###Output
_____no_output_____
###Markdown
For most agents, the `collect_data_spec` is a `Trajectory` named tuple containing the observation, action, reward etc. Data CollectionAs REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer.
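As a quick aside (illustrative, not part of the original tutorial), the Trajectory-shaped spec can simply be printed to see those fields:
###Code
# Illustrative only: inspect the Trajectory-shaped spec described above.
print(tf_agent.collect_data_spec)
print(tf_agent.collect_data_spec._fields)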
###Code
#@test {"skip": true}
def collect_episode(environment, policy, num_episodes):
episode_counter = 0
environment.reset()
while episode_counter < num_episodes:
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
replay_buffer.add_batch(traj)
if traj.is_boundary():
episode_counter += 1
# This loop is so common in RL, that we provide standard implementations of
# these. For more details see the drivers module.
###Output
_____no_output_____
###Markdown
Training the agentThe training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.The following will take ~3 minutes to run.
###Code
#@test {"skip": true}
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
tf_agent.train = common.function(tf_agent.train)
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few episodes using collect_policy and save to the replay buffer.
collect_episode(
train_env, tf_agent.collect_policy, collect_episodes_per_iteration)
# Use data from the buffer and update the agent's network.
experience = replay_buffer.gather_all()
train_loss = tf_agent.train(experience)
replay_buffer.clear()
step = tf_agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss.loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
WARNING:tensorflow:From <ipython-input-1-235ae48023f9>:24: ReplayBuffer.gather_all (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version.
Instructions for updating:
Use `as_dataset(..., single_deterministic_pass=True)` instead.
###Markdown
Visualization PlotsWe can plot return vs global steps to see the performance of our agent. In `Cartpole-v0`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 200, the maximum possible return is also 200.
###Code
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=250)
###Output
_____no_output_____
###Markdown
Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
###Output
_____no_output_____
###Markdown
The following code visualizes the agent's policy for a few episodes:
###Code
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = tf_agent.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
embed_mp4(video_filename)
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (400, 600) to (400, 608) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
|
week3_model_free/qlearning.ipynb | ###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
return max([self.get_qvalue(state, action) for action in possible_actions])
# return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
Q = (1 - learning_rate) * self.get_qvalue(state, action) + \
learning_rate * (reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, Q)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
best_action = None
highest_Q = 0
for action in possible_actions:
Q = self.get_qvalue(state, action)
if best_action is None or Q > highest_Q:
highest_Q = Q
best_action = action
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.getPolicy).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probablity, generate uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
if random.random() < epsilon:
return random.choice(possible_actions)
else:
return self.get_best_action(state)
# return chosen_action
###Output
Overwriting qlearning.py
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4, fig=None, ax=None):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
if fig is not None and ax is not None:
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
# train (update) agent for state s
agent.update(s, a, r, next_s)
s = next_s
total_reward +=r
if done:
if fig is not None and ax is not None:
print('Done', done, t)
break
else:
if fig is not None and ax is not None:
print('Lost', t)
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 2.9191091959171894e-05 mean reward = 8.1
###Markdown
Submit to Coursera I: Preparation
###Code
#from submit import submit_qlearning1
#submit_qlearning1(rewards, '[email protected]', 'hiOAcihu5hKcEvsh')
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on CartPole-v0.This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.The tricky part is to get the n_digits right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
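For instance (illustrative, made-up values), rounding every dimension to one decimal place makes two nearby observations collapse into the same hashable tuple:
###Code
# Illustrative only: two nearby (made-up) observations fall into the same bin
# once each dimension is rounded to one decimal place.
s1 = np.array([0.0231, -0.0412, 0.0183, 0.0310])
s2 = np.array([0.0198, -0.0375, 0.0151, 0.0287])
print(tuple(np.round(s1, 1)))
print(tuple(np.round(s2, 1)))
print(tuple(np.round(s1, 1)) == tuple(np.round(s2, 1)))  # True -> same Q-table entry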
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
first state:[0.01830095 0.03724055 0.02582129 0.04906594]
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
%matplotlib inline
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
# pass
def _observation(self, state):
# return tuple(state)
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
state = np.round(state*2, 1)/2
return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
print("States number: ", len(set(all_states)))
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
States number: 7552
###Markdown
Learn binarized policy
Now let's train a policy that uses binarized state space.
__Tips:__
* If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization.
* If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.
* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required.
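A small helper along these lines (illustrative only, not required by the assignment) can be called after the training loop below to check whether the chosen binarization lands in that range:
###Code
# Illustrative helper (not part of the assignment): call after training to see
# whether the binarization yields a reasonable number of distinct states.
def report_state_coverage(agent, low=10**3, high=10**4):
    n_states = len(agent._qvalues)
    print("distinct states seen:", n_states)
    if n_states < low:
        print("binarization may be too coarse")
    elif n_states > high:
        print("binarization may be too fine-grained")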
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
%matplotlib inline
rewards = []
for i in range(10000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 0 mean reward = 126.4
###Markdown
Visualization
###Code
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
eps = agent.epsilon
agent.epsilon = 0
play_and_train(env, agent, fig=fig, ax=ax)
agent.epsilon = eps
###Output
_____no_output_____
###Markdown
Submit to Coursera II: Submission
###Code
# from submit import submit_qlearning2
# submit_qlearning2(rewards, <EMAIL>, <TOKEN>)
submit_rewards2 = rewards.copy()
from submit import submit_qlearning_all
submit_qlearning_all(submit_rewards1, submit_rewards2, '[email protected]', 'lbW65gWwyOQHjgzU')
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week3_model_free/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from collections import defaultdict
import random
import math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on https://inst.eecs.berkeley.edu/~cs188/sp19/projects.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self, state, action, value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
# If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
action_values = [self.get_qvalue(state, action) for action in possible_actions]
value = np.max(action_values)
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
# agent parameters
gamma = self.discount
learning_rate = self.alpha
q_value = self.get_qvalue(state, action)
q_value = (1 - learning_rate) * q_value\
+ learning_rate * (reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, q_value)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
# If there are no legal actions, return None
if len(possible_actions) == 0:
return None
action_values = {action : self.get_qvalue(state, action) for action in possible_actions}
best_action = max(action_values, key=action_values.get)
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probablity, generate uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
# If there are no legal actions, return None
if len(possible_actions) == 0:
return None
# agent parameters:
epsilon = self.epsilon
if np.random.rand() < self.epsilon:
chosen_action = np.random.choice(possible_actions)
else:
chosen_action = self.get_best_action(state)
return chosen_action
###Output
_____no_output_____
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v3")
n_actions = env.action_space.n
agent = QLearningAgent(
alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
def play_and_train(env, agent, t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
agent.update(s, a, r, next_s)
s = next_s
total_reward += r
if done:
break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i % 100 == 0:
clear_output(True)
plt.title('eps = {:e}, mean reward = {:.1f}'.format(agent.epsilon, np.mean(rewards[-10:])))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
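###Markdown
A side illustration (not part of the assignment): under the schedule used above, epsilon is multiplied by 0.99 after every episode, so it drops below 1e-4 after roughly 800 episodes and the late episodes are almost purely greedy. The short cell below just plots that schedule.
###Code
# How fast does the *= 0.99-per-episode schedule shrink epsilon?
episodes = np.arange(1000)
plt.semilogy(episodes, 0.25 * 0.99 ** episodes)
plt.xlabel('episode')
plt.ylabel('epsilon (log scale)')
plt.grid()
plt.show()
###Output
_____no_output_____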
###Markdown
Submit to Coursera I: Preparation
###Code
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on `CartPole-v0`. This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x, n_digits)` (or `np.round`) to round a real number to a given amount of digits. The tricky part is to get the `n_digits` right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
def make_env():
return gym.make('CartPole-v0').env # .env unwraps the TimeLimit wrapper
env = make_env()
n_actions = env.action_space.n
print("first state: %s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
first state: [ 0.00912112 -0.01625993 -0.00136323 0.00312888]
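###Markdown
A one-cell illustration of the rounding idea (the per-dimension digit counts [0, 1, 1, 1] are simply the values used by the Binarizer further below): rounding each component turns the continuous observation into a hashable tuple that can be used as a Q-table key.
###Code
# Round each observation component to a fixed number of digits; the digit
# counts [0, 1, 1, 1] mirror the Binarizer defined later in this notebook.
s = env.reset()
s_discrete = tuple(round(x, d) for x, d in zip(s, [0, 1, 1, 1]))
print(s, '->', s_discrete)
###Output
_____no_output_____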
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
def visualize_cartpole_observation_distribution(seen_observations):
seen_observations = np.array(seen_observations)
# The meaning of the observations is documented in
# https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py
f, axarr = plt.subplots(2, 2, figsize=(16, 9), sharey=True)
for i, title in enumerate(['Cart Position', 'Cart Velocity', 'Pole Angle', 'Pole Velocity At Tip']):
ax = axarr[i // 2, i % 2]
ax.hist(seen_observations[:, i], bins=20)
ax.set_title(title)
xmin, xmax = ax.get_xlim()
ax.set_xlim(min(xmin, -xmax), max(-xmin, xmax))
ax.grid()
f.tight_layout()
seen_observations = []
for _ in range(1000):
seen_observations.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
seen_observations.append(s)
visualize_cartpole_observation_distribution(seen_observations)
###Output
_____no_output_____
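###Markdown
Before picking n_digits it can help to look at the raw spread of each dimension (a small sketch reusing the seen_observations recorded above): narrow dimensions such as the pole angle need more digits to get a useful number of bins, while wide ones such as the velocities can be rounded more coarsely.
###Code
# Per-dimension spread of the recorded raw observations; a rough guide for
# how coarsely each dimension can be rounded.
obs = np.array(seen_observations)
print('min:', obs.min(axis=0).round(2))
print('max:', obs.max(axis=0).round(2))
print('std:', obs.std(axis=0).round(2))
###Output
_____no_output_____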
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def observation(self, state):
# Hint: you can do that with round(x, n_digits).
# You may pick a different n_digits for each dimension.
state = round(state[0], 0), round(state[1], 1), round(state[2], 1), round(state[3], 1)
return tuple(state)
env = Binarizer(make_env())
seen_observations = []
for _ in range(1000):
seen_observations.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
seen_observations.append(s)
if done:
break
visualize_cartpole_observation_distribution(seen_observations)
###Output
_____no_output_____
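###Markdown
A quick check of how coarse this discretization is (sketch, reusing the seen_observations gathered from the binarized env above): counting distinct tuples gives a rough lower bound on the number of Q-table entries the agent will create while exploring.
###Code
# Number of distinct binarized states visited by random play.
distinct_states = set(map(tuple, seen_observations))
print('distinct binarized states seen:', len(distinct_states))
###Output
_____no_output_____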
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__* Note that increasing the number of digits for one dimension of the observations increases your state space by a factor of $10$.* If your binarization is too fine-grained, your agent will take much longer than 10000 steps to converge. You can either increase the number of iterations and reduce epsilon decay or change binarization. In practice we found that this kind of mistake is rather frequent.* If your binarization is too coarse, your agent may fail to find the optimal policy. In practice we found that on this particular environment this kind of mistake is rare.* **Start with a coarse binarization** and make it more fine-grained if that seems necessary.* Having $10^3$–$10^4$ distinct states is recommended (`len(agent._qvalues)`), but not required.* If things don't work without annealing $\varepsilon$, consider adding that, but make sure that it doesn't go to zero too quickly.A reasonable agent should attain an average reward of at least 50.
###Code
import pandas as pd
def moving_average(x, span=100):
return pd.DataFrame({'x': np.asarray(x)}).x.ewm(span=span).mean().values
agent = QLearningAgent(
alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
rewards = []
epsilons = []
N = 5000
epsilon_start = 0.25
epsilon_end = 0.05
for i in range(N):
reward = play_and_train(env, agent)
rewards.append(reward)
epsilons.append(agent.epsilon)
if i % 100 == 0:
rewards_ewma = moving_average(rewards)
clear_output(True)
plt.plot(rewards, label='rewards')
plt.plot(rewards_ewma, label='rewards ewma@100')
plt.legend()
plt.grid()
plt.title('eps = {:e}, rewards ewma@100 = {:.1f}'.format(agent.epsilon, rewards_ewma[-1]))
plt.show()
# update epsilon
smooth = max((N - i) / N, 0)
agent.epsilon = (epsilon_start - epsilon_end) * smooth + epsilon_end
###Output
_____no_output_____
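###Markdown
An optional evaluation sketch (the evaluate_greedy helper below is not part of the assignment): rolling out the greedy policy without exploration or learning gives a cleaner estimate of final performance than the training curve, which still contains epsilon-random actions.
###Code
def evaluate_greedy(env, agent, n_episodes=100, t_max=10**4):
    """Roll out the greedy policy: no exploration, no Q-value updates."""
    returns = []
    for _ in range(n_episodes):
        s = env.reset()
        total = 0.0
        for _ in range(t_max):
            s, r, done, _ = env.step(agent.get_best_action(s))
            total += r
            if done:
                break
        returns.append(total)
    return np.mean(returns)
print('greedy mean reward over 100 episodes:', evaluate_greedy(env, agent))
###Output
_____no_output_____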
###Markdown
Submit to Coursera II: Submission
###Code
submit_rewards2 = rewards.copy()
from submit import submit_qlearning
submit_qlearning(submit_rewards1, submit_rewards2, '[email protected]', 'lkMPdXUqKXP6ikJI')
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
<YOUR CODE HERE>
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
<YOUR CODE HERE>
self.set_qvalue(state, action, <YOUR_QVALUE>)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
<YOUR CODE HERE>
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
<YOUR CODE HERE>
return chosen_action
###Output
Overwriting qlearning.py
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = <YOUR CODE>
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
<YOUR CODE HERE>
s = next_s
total_reward +=r
if done: break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 2.9191091959171894e-05 mean reward = 8.5
###Markdown
Submit to Coursera I
###Code
from submit import submit_qlearning1
submit_qlearning1(rewards, <EMAIL>, <TOKEN>)
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on CartPole-v0.This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.The tricky part is to get the n_digits right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
_____no_output_____
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def _observation(self, state):
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__ * If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization. * If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required.
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
###Markdown
Submit to Coursera II
###Code
from submit import submit_qlearning2
submit_qlearning2(rewards, <EMAIL>, <TOKEN>)
###Output
_____no_output_____
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week3_model_free/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from collections import defaultdict
import random
import math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on https://inst.eecs.berkeley.edu/~cs188/sp19/projects.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self, state, action, value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
# If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
value = []
for a in possible_actions:
value.append( self.get_qvalue(state,a) )
return max(value)
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
# agent parameters
gamma = self.discount
learning_rate = self.alpha
q_new = (1-learning_rate) * self.get_qvalue(state,action) + learning_rate *(reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, q_new )
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
# If there are no legal actions, return None
if len(possible_actions) == 0:
return None
value, best_action = None, None
for a in possible_actions:
if value == None or value < self.get_qvalue(state,a):
value = self.get_qvalue(state,a)
best_action = a
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
# If there are no legal actions, return None
if len(possible_actions) == 0:
return None
# agent parameters:
epsilon = self.epsilon
chosen_action = None
if random.random()< epsilon:
chosen_action = random.choice(possible_actions)
else:
chosen_action = self.get_best_action(state)
return chosen_action
###Output
_____no_output_____
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v3")
n_actions = env.action_space.n
agent = QLearningAgent(
alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
def play_and_train(env, agent, t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
agent.update(s,a,r,next_s)
s = next_s
total_reward += r
if done:
break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i % 100 == 0:
clear_output(True)
plt.title('eps = {:e}, mean reward = {:.1f}'.format(agent.epsilon, np.mean(rewards[-10:])))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
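###Markdown
A small inspection cell (illustrative only, not required by the assignment): after training on Taxi we can peek at the Q-values the agent has learned for a freshly reset state and the greedy action they imply.
###Code
# Peek at the learned Q-values for one Taxi state.
s = env.reset()
print('state:', s)
print('Q-values:', [round(agent.get_qvalue(s, a), 2) for a in range(n_actions)])
print('greedy action:', agent.get_best_action(s))
###Output
_____no_output_____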
###Markdown
Submit to Coursera I: Preparation
###Code
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on `CartPole-v0`. This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x, n_digits)` (or `np.round`) to round a real number to a given amount of digits. The tricky part is to get the `n_digits` right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
def make_env():
return gym.make('CartPole-v0').env # .env unwraps the TimeLimit wrapper
env = make_env()
n_actions = env.action_space.n
print("first state: %s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
first state: [-0.00207288 -0.01012714 -0.03260051 -0.01736818]
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
def visualize_cartpole_observation_distribution(seen_observations):
seen_observations = np.array(seen_observations)
# The meaning of the observations is documented in
# https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py
f, axarr = plt.subplots(2, 2, figsize=(16, 9), sharey=True)
for i, title in enumerate(['Cart Position', 'Cart Velocity', 'Pole Angle', 'Pole Velocity At Tip']):
ax = axarr[i // 2, i % 2]
ax.hist(seen_observations[:, i], bins=20)
ax.set_title(title)
xmin, xmax = ax.get_xlim()
ax.set_xlim(min(xmin, -xmax), max(-xmin, xmax))
ax.grid()
f.tight_layout()
seen_observations = []
for _ in range(1000):
seen_observations.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
seen_observations.append(s)
visualize_cartpole_observation_distribution(seen_observations)
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def observation(self, state):
# Hint: you can do that with round(x, n_digits).
# You may pick a different n_digits for each dimension.
round_para = [1,1,2,1]
state = [ round(s,p) for s,p in zip(state,round_para)]
return tuple(state)
env = Binarizer(make_env())
seen_observations = []
for _ in range(1000):
seen_observations.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
seen_observations.append(s)
if done:
break
visualize_cartpole_observation_distribution(seen_observations)
so = np.array(seen_observations)
print(seen_observations[1])
print(seen_observations[10])
for i in range(len(so[0])):
print(len(set(so[:,i])),":",set(so[:,i]))
###Output
(-0.0, 0.2, 0.05, -0.3)
(-0.0, 0.0, 0.06, 0.1)
16 : {-0.0, 0.4, 0.1, 0.3, 0.2, -0.1, -0.2, -0.3, -0.4, -0.5, 0.5, -0.6, -0.9, -0.7, -0.8, 0.6}
46 : {0.0, 0.4, 0.6, -0.2, 0.2, 0.8, 1.0, 1.2, 1.4, 1.1, 0.5, -0.5, 1.5, 2.0, 2.5, -0.4, -0.9, -1.4, -1.9, 1.6, -1.3, -1.8, 2.1, 0.7, -0.8, -0.3, -2.3, 1.7, -0.1, 0.1, -1.7, 2.3, 0.3, -0.7, 1.3, 1.8, -2.1, -1.6, -1.1, 1.9, -0.6, 0.9, -2.0, -1.5, -1.0, -1.2}
53 : {0.07, 0.06, 0.0, -0.06, -0.12, -0.19, -0.07, -0.14, -0.2, -0.24, 0.21, -0.25, 0.25, 0.26, 0.22, -0.01, 0.18, 0.05, -0.08, 0.11, 0.23, 0.19, 0.15, -0.02, 0.04, 0.2, 0.16, -0.13, 0.12, 0.09, -0.03, -0.1, -0.15, -0.22, -0.09, -0.17, -0.23, 0.03, -0.16, 0.13, 0.17, -0.04, 0.02, 0.14, 0.1, 0.08, -0.05, -0.11, 0.01, -0.18, 0.24, -0.26, -0.21}
61 : {-0.0, 0.4, 0.3, -0.3, 0.5, 0.2, -0.5, -0.8, -0.2, -0.7, 1.2, 1.0, 1.5, 2.0, 2.5, -2.4, -2.9, -0.4, -0.9, -1.4, -1.9, 0.6, 1.6, 1.1, -1.3, -1.8, 2.1, 2.6, 0.7, -2.3, 1.7, -2.2, -2.7, -2.8, 0.1, -0.1, -1.7, -1.2, 2.8, 2.2, 2.3, 2.7, 0.8, -3.1, 1.3, 1.8, -2.1, -2.6, -1.1, -1.6, 1.4, 1.9, -0.6, 0.9, 2.4, 2.9, -2.0, -2.5, -3.0, -1.0, -1.5}
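###Markdown
A follow-up on the per-dimension counts printed above (sketch reusing the so array): multiplying them gives an upper bound on the size of the binarized state space, which can be compared with the number of state tuples actually visited.
###Code
# Upper bound on the binarized state space vs. states actually visited.
bin_counts = [len(set(so[:, i])) for i in range(so.shape[1])]
print('per-dimension bins:', bin_counts)
print('upper bound on states:', int(np.prod(bin_counts)))
print('distinct states visited:', len(set(map(tuple, so))))
###Output
_____no_output_____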
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__* Note that increasing the number of digits for one dimension of the observations increases your state space by a factor of $10$.* If your binarization is too fine-grained, your agent will take much longer than 10000 steps to converge. You can either increase the number of iterations and reduce epsilon decay or change binarization. In practice we found that this kind of mistake is rather frequent.* If your binarization is too coarse, your agent may fail to find the optimal policy. In practice we found that on this particular environment this kind of mistake is rare.* **Start with a coarse binarization** and make it more fine-grained if that seems necessary.* Having $10^3$–$10^4$ distinct states is recommended (`len(agent._qvalues)`), but not required.* If things don't work without annealing $\varepsilon$, consider adding that, but make sure that it doesn't go to zero too quickly.A reasonable agent should attain an average reward of at least 50.
###Code
import pandas as pd
def moving_average(x, span=100):
return pd.DataFrame({'x': np.asarray(x)}).x.ewm(span=span).mean().values
agent = QLearningAgent(
alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
rewards = []
epsilons = []
for i in range(10000):
reward = play_and_train(env, agent)
rewards.append(reward)
epsilons.append(agent.epsilon)
# OPTIONAL: <YOUR CODE: adjust epsilon>
if i % 100 == 0:
rewards_ewma = moving_average(rewards)
agent.epsilon *= 0.99
clear_output(True)
plt.plot(rewards, label='rewards')
plt.plot(rewards_ewma, label='rewards ewma@100')
plt.legend()
plt.grid()
plt.title('eps = {:e}, rewards ewma@100 = {:.1f}'.format(agent.epsilon, rewards_ewma[-1]))
plt.show()
len(agent._qvalues)
###Output
_____no_output_____
###Markdown
Submit to Coursera II: Submission
###Code
submit_rewards2 = rewards.copy()
from submit import submit_qlearning
submit_qlearning(submit_rewards1, submit_rewards2, '[email protected]', 'E8NnjUX37pBhu0Uq')
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
<YOUR CODE HERE>
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
<YOUR CODE HERE>
self.set_qvalue(state, action, <YOUR_QVALUE>)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
<YOUR CODE HERE>
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
<YOUR CODE HERE>
return chosen_action
###Output
Overwriting qlearning.py
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = <YOUR CODE>
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
<YOUR CODE HERE>
s = next_s
total_reward +=r
if done: break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 2.9191091959171894e-05 mean reward = 8.5
###Markdown
Submit to Coursera I: Preparation
###Code
# from submit import submit_qlearning1
# submit_qlearning1(rewards, <EMAIL>, <TOKEN>)
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on CartPole-v0.This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.The tricky part is to get the n_digits right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
_____no_output_____
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def _observation(self, state):
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__ * If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization. * If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required.
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
###Markdown
Submit to Coursera II: Submission
###Code
# from submit import submit_qlearning2
# submit_qlearning2(rewards, <EMAIL>, <TOKEN>)
submit_rewards2 = rewards.copy()
from submit import submit_qlearning_all
submit_qlearning_all(submit_rewards1, submit_rewards2, <EMAIL>, <TOKEN>)
###Output
_____no_output_____
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Q-Learning$$Q(s, a)\leftarrow \alpha\cdot \hat Q(s, a) + (1-\alpha)Q(s,a)$$$$\hat Q(s, a)=r(s, a)+\gamma\cdot \max_{a'}Q(s', a')$$
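For example, a single update with $\alpha=0.1$, $\gamma=0.99$, a current estimate $Q(s,a)=0.5$, observed reward $r(s,a)=1$ and $\max_{a'}Q(s',a')=2$ gives the target $\hat Q(s,a)=1+0.99\cdot 2=2.98$ and the new estimate $Q(s,a)\leftarrow 0.1\cdot 2.98+0.9\cdot 0.5=0.748$.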
###Code
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
value = max([self.get_qvalue(state, action) for action in possible_actions])
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
q_value = self.get_qvalue(state, action)
V = self.get_value(next_state)
q_value = (1-learning_rate)*q_value + learning_rate*(reward + gamma*V)
self.set_qvalue(state, action, q_value)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
q_action_dct = {action:self.get_qvalue(state, action) for action in possible_actions}
best_action = max(q_action_dct, key=q_action_dct.get)
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
best_action = self.get_best_action(state)
p = np.random.uniform()
chosen_action = best_action if epsilon < p else np.random.choice(possible_actions)
return chosen_action
dct = {'a':1, 'b':100, 'c':4}
max(dct, key=dct.get)
###Output
_____no_output_____
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
agent.update(s, a, r, next_s)
s = next_s
total_reward +=r
if done: break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 2.9191091959171894e-05 mean reward = 8.6
###Markdown
Submit to Coursera I: Preparation
###Code
# from submit import submit_qlearning1
# submit_qlearning1(rewards, '[email protected]', 'Huu92iEvA0q4MRaR')
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on CartPole-v0.This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.The tricky part is to get the n_digits right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
first state:[ 0.00597526 0.04529618 -0.03786077 -0.02361543]
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def _observation(self, state):
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
# the length of the states is 4
# the range of each state is:
# 0: -0.5~0.5
# 1: -2~2
# 2: -0.2~0.2
# 3: -3~3
state[0] = round(state[0], 0)
state[1] = round(state[1], 1)
state[2] = round(state[2], 2)
state[3] = round(state[3], 1)
return tuple(state)
round(3.23, 0)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__ * If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization. * If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required.
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.2, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 8.11182771568148e-23 mean reward = 16.5
###Markdown
Submit to Coursera II: Submission
###Code
# from submit import submit_qlearning2
# submit_qlearning2(rewards, <EMAIL>, <TOKEN>)
submit_rewards2 = rewards.copy()
from submit import submit_qlearning_all
submit_qlearning_all(submit_rewards1, submit_rewards2, '[email protected]', 'A1BY5s3VxpF4jaGI')
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
#<YOUR CODE HERE>
value = max(self.get_qvalue(state,action) for action in possible_actions)
return value
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
#<YOUR CODE HERE>
value = (1-learning_rate) * self.get_qvalue(state,action) + learning_rate * (reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, value)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#<YOUR CODE HERE>
best_action = None
best_q = float("-inf")
for action in possible_actions:
cur_q = self.get_qvalue(state,action)
if cur_q > best_q:
best_q = cur_q
best_action = action
return best_action
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
#<YOUR CODE HERE>
if random.random() > epsilon:
chosen_action = self.get_best_action(state)
else:
chosen_action = random.choice(possible_actions)
return chosen_action
###Output
Overwriting qlearning.py
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)#<YOUR CODE>
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
#<YOUR CODE HERE>
agent.update(s, a, r, next_s)
s = next_s
total_reward +=r
if done: break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 2.9191091959171894e-05 mean reward = 7.3
###Markdown
Submit to Coursera I
###Code
#from submit import submit_qlearning1
#submit_qlearning1(rewards, <EMAIL>, <TOKEN>)
###Output
_____no_output_____
###Markdown
Binarized state spacesUse agent to train efficiently on CartPole-v0.This environment has a continuous set of possible states, so you will have to group them into bins somehow.The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.The tricky part is to get the n_digits right for each state to train effectively.Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
first state:[ 0.01355154 0.02373539 0.02998196 -0.03050007]
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def _observation(self, state):
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
state = [round(v,dig) for v,dig in zip(state,[1,1,2,0])]
return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
WARN: <class '__main__.Binarizer'> doesn't implement 'observation' method. Maybe it implements deprecated '_observation' method.
###Markdown
Learn binarized policyNow let's train a policy that uses binarized state space.__Tips:__ * If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization. * If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required.
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions= lambda s: range(n_actions))
rewards = []
for i in range(3000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
if i %20 ==0:
agent.epsilon *= 0.99
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 0.055362946809715236 mean reward = 87.2
###Markdown
Submit to Coursera II
###Code
#from submit import submit_qlearning2
#submit_qlearning2(rewards, <EMAIL>, <TOKEN>)
###Output
_____no_output_____
###Markdown
Q-learningThis notebook will guide you through implementation of vanilla Q-learning algorithm.You need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%writefile qlearning.py
from collections import defaultdict
import random, math
import numpy as np
class QLearningAgent:
def __init__(self, alpha, epsilon, discount, get_legal_actions):
"""
Q-Learning Agent
based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html
Instance variables you have access to
- self.epsilon (exploration prob)
- self.alpha (learning rate)
- self.discount (discount rate aka gamma)
Functions you should use
- self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}
which returns legal actions for a state
- self.get_qvalue(state,action)
which returns Q(state,action)
- self.set_qvalue(state,action,value)
which sets Q(state,action) := value
!!!Important!!!
Note: please avoid using self._qValues directly.
There's a special self.get_qvalue/set_qvalue for that.
"""
self.get_legal_actions = get_legal_actions
self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))
self.alpha = alpha
self.epsilon = epsilon
self.discount = discount
def get_qvalue(self, state, action):
""" Returns Q(state,action) """
return self._qvalues[state][action]
def set_qvalue(self,state,action,value):
""" Sets the Qvalue for [state,action] to the given value """
self._qvalues[state][action] = value
#---------------------START OF YOUR CODE---------------------#
def get_value(self, state):
"""
Compute your agent's estimate of V(s) using current q-values
V(s) = max_over_action Q(state,action) over possible actions.
Note: please take into account that q-values can be negative.
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return 0.0
if len(possible_actions) == 0:
return 0.0
values = [self.get_qvalue(state, action) for action in possible_actions]
return np.max(values)
def update(self, state, action, reward, next_state):
"""
You should do your Q-Value update here:
Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))
"""
#agent parameters
gamma = self.discount
learning_rate = self.alpha
qvalue = self.get_qvalue(state, action)
new_qvalue = (1 - learning_rate) * qvalue + learning_rate * (reward + gamma * self.get_value(next_state))
self.set_qvalue(state, action, new_qvalue)
def get_best_action(self, state):
"""
Compute the best action to take in a state (using current q-values).
"""
possible_actions = self.get_legal_actions(state)
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
qvalues = [self.get_qvalue(state, possible_actions[i]) for i in range(len(possible_actions))]
best_action_idx = np.argmax(qvalues)
return possible_actions[best_action_idx]
def get_action(self, state):
"""
Compute the action to take in the current state, including exploration.
With probability self.epsilon, we should take a random action.
otherwise - the best policy action (self.get_best_action).
Note: To pick randomly from a list, use random.choice(list).
To pick True or False with a given probability, generate a uniform number in [0, 1]
and compare it with your probability
"""
# Pick Action
possible_actions = self.get_legal_actions(state)
action = None
#If there are no legal actions, return None
if len(possible_actions) == 0:
return None
#agent parameters:
epsilon = self.epsilon
prob = np.random.uniform(low = 0, high = 1)
if epsilon >= prob:
chosen_action = np.random.choice(possible_actions)
else:
chosen_action = self.get_best_action(state)
return chosen_action
###Output
UsageError: Line magic function `%%writefile` not found.
###Markdown
Try it on taxiHere we use the qlearning agent on taxi env from openai gym.You will need to insert a few agent functions here.
###Code
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
def play_and_train(env,agent,t_max=10**4):
"""
This function should
- run a full game, actions given by agent's e-greedy policy
- train agent using agent.update(...) whenever it is possible
- return total reward
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s.
a = agent.get_action(s)
next_s, r, done, _ = env.step(a)
# train (update) agent for state s
agent.update(s, a, r, next_s)
s = next_s
total_reward +=r
if done: break
return total_reward
from IPython.display import clear_output
rewards = []
for i in range(1000):
rewards.append(play_and_train(env, agent))
agent.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
eps = 1.260215853156675e-09 mean reward = -2000.0
[... episode reward lists omitted: every entry is -2000.0 ...]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
[-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, 
-2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0, -2000.0]
###Markdown
Submit to Coursera I: Preparation
###Code
submit_rewards1 = rewards.copy()
###Output
_____no_output_____
###Markdown
Binarized state spaces
Use the agent to train efficiently on CartPole-v0. This environment has a continuous set of possible states, so you will have to group them into bins somehow. The simplest way is to use `round(x, n_digits)` (or numpy's `np.round`) to round each real-valued state component to a given number of digits. The tricky part is to pick the right `n_digits` for each state dimension so that training is effective. Note that you don't need to convert the state to integers, only to __tuples__ of any kind of values.
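For illustration only, here is a minimal sketch of this kind of rounding (not the graded solution); the per-dimension digit counts below are assumptions you would tune against the state histograms plotted later:

```python
import numpy as np

# hypothetical precision for CartPole's 4 state variables:
# (cart position, cart velocity, pole angle, pole angular velocity)
N_DIGITS = (0, 1, 2, 1)  # assumption: adjust after inspecting the observed ranges

def binarize_state(state, n_digits=N_DIGITS):
    """Round each state component to its own precision and return a hashable tuple."""
    return tuple(round(float(x), d) for x, d in zip(state, n_digits))

print(binarize_state(np.array([0.0213, -0.731, 0.0342, 1.287])))  # (0.0, -0.7, 0.03, 1.3)
```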
###Code
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s" % (env.reset()))
plt.imshow(env.render('rgb_array'))
###Output
_____no_output_____
###Markdown
Play a few gamesWe need to estimate observation distributions. To do so, we'll play a few games and record all states.
###Code
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:, obs_i], bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Binarize environment
###Code
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def observation(self, state):
        # Round each dimension to its own precision so that similar states map
        # to the same discrete bin (hint from the exercise: round(x, n_digits)).
        # NOTE: the digit counts below are one reasonable choice (an assumption);
        # tune them using the state histograms plotted in this notebook.
        n_digits = (0, 1, 2, 1)
        state = [round(float(x), d) for x, d in zip(state, n_digits)]
        return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s, r, done, _ = env.step(env.action_space.sample())
all_states.append(s)
if done: break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Learn binarized policy
Now let's train a policy that uses the binarized state space.

__Tips:__
* If your binarization is too coarse, your agent may fail to find the optimal policy. In that case, make the binarization finer.
* If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase the number of iterations and slow down the epsilon decay, or make the binarization coarser.
* Having 10^3 ~ 10^4 distinct states is recommended (`len(QLearningAgent._qvalues)`), but not required; see the quick check sketched below.
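As a small illustrative check (assuming, as the tip above suggests, that the `QLearningAgent` defined earlier in this notebook keeps its table in the `_qvalues` dict), you can print how many distinct states the agent has seen so far:

```python
# rough size check of the learned Q-table; run after some training episodes
print("distinct states seen:", len(agent._qvalues))
```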
###Code
agent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions=lambda s: range(n_actions))
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
#OPTIONAL YOUR CODE: adjust epsilon
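    # (assumption) one simple option: multiplicative epsilon decay with a floor,
    # so exploration shrinks over time but never disappears entirely
    agent.epsilon = max(0.01, agent.epsilon * 0.99)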
    if i % 100 == 0:
clear_output(True)
print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
###Markdown
Submit to Coursera II: Submission
###Code
submit_rewards2 = rewards.copy()
from submit import submit_qlearning
submit_qlearning(submit_rewards1, submit_rewards2, <EMAIL>, <TOKEN>)
###Output
_____no_output_____ |
Image classification using SVM's.ipynb | ###Markdown
Importing modules
###Code
import numpy as np
import os
from pathlib import Path
from keras.preprocessing import image
from matplotlib import pyplot as plt
labels_dict = {'cat':0,'dog':1,'horse':2,'human':3}
dirs = os.listdir('Images1')
dirs
###Output
_____no_output_____
###Markdown
Creating a features array and labels array for all the images
###Code
p = Path('Images1')
dirs = p.glob('*')
labels_dict = {'cat':0,'dog':1,'horse':2,'human':3}
images_data = []
labels = []
for folder_dir in dirs:
label = str(folder_dir).split('\\')[-1][:-1]
for img_path in folder_dir.glob('*.jpg'):
img = image.load_img(img_path,target_size = (32,32))
img_array = image.img_to_array(img)
images_data.append(img_array)
labels.append(labels_dict[label])
images_data = np.array(images_data,dtype='float32')/255.0
labels = np.array(labels)
print(images_data.shape)
print(labels.shape)
import random
data = list(zip(images_data,labels))
random.shuffle(data)
images_data[:],labels[:] = zip(*data)
###Output
_____no_output_____
###Markdown
Visualising some random images
###Code
def plotimg(img):
from matplotlib import pyplot as plt
plt.imshow(img)
plt.axis('off')
plt.show()
return
for i in range(10):
plotimg(images_data[i])
images_data = images_data.reshape(images_data.shape[0],-1)
images_data.shape
###Output
_____no_output_____
###Markdown
SVM classifier
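The class below trains a linear soft-margin SVM with mini-batch gradient descent. For reference, the objective it minimizes (this is what the `hingeLoss` method computes) is the regularized hinge loss

$$L(W, b) = \frac{1}{2} W W^T + C \sum_{i=1}^{m} \max\left(0,\; 1 - y_i \left(W x_i^T + b\right)\right)$$

where $C$ controls the trade-off between a wide margin and misclassification of training points.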
###Code
class MySVM():
def __init__(self,C=1.0):
self.C = C
self.W = 0
self.b = 0
def hingeLoss(self,W,b,X,Y):
loss = 0.0
loss += .5*np.dot(W,W.T)
m = X.shape[0]
for i in range(m):
ti = Y[i]*(np.dot(W,X[i].T)+b)
loss += self.C *max(0,(1-ti))
return loss[0][0]
def fit(self,X,Y,batch_size=32,learning_rate=0.0001,maxItr=100):
no_of_features = X.shape[1]
no_of_samples = X.shape[0]
n = learning_rate
c = self.C
#Init the model parameters
W = np.zeros((1,no_of_features))
bias = 0
losses = []
for i in range(maxItr):
#Training Loop
l = self.hingeLoss(W,bias,X,Y)
losses.append(l)
ids = np.arange(no_of_samples)
np.random.shuffle(ids)
            #Mini-batch gradient descent with random shuffling of the sample order
for batch_start in range(0,no_of_samples,batch_size):
#Assume 0 gradient for the batch
gradw = 0
gradb = 0
#Iterate over all examples in the mini batch
for j in range(batch_start,batch_start+batch_size):
if j<no_of_samples:
i = ids[j]
ti = Y[i]*(np.dot(W,X[i].T)+bias)
if ti>1:
gradw += 0
gradb += 0
else:
gradw += c*Y[i]*X[i]
gradb += c*Y[i]
#Gradient for the batch is ready! Update W,B
W = W - n*W + n*gradw
bias = bias + n*gradb
self.W = W
self.b = bias
return W,bias,losses
classes = np.unique(labels)
classes
def class_wisedata(x,y):
data = {}
for i in range(len(classes)):
data[i] = []
for i in range(x.shape[0]):
data[y[i]].append(x[i])
for k in data.keys():
data[k] = np.array(data[k])
return data
data = class_wisedata(images_data,labels)
print(data[0].shape)
print(data[1].shape)
print(data[2].shape)
print(data[3].shape)
###Output
(202, 3072)
(202, 3072)
(202, 3072)
(202, 3072)
###Markdown
For one-vs-one classification, if we have n classes then we require nC2 = n(n-1)/2 binary classifiers. So in our case we have 4 classes, and therefore we need 6 classifiers (see the quick check below).
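As a quick sanity check of that count (illustration only; `math.comb` requires Python 3.8+):

```python
from math import comb

n_classes = 4
print(comb(n_classes, 2))  # 6 one-vs-one classifiers for 4 classes
```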
###Code
#Function for getting data for two classes at a time.
def pair_data(d1,d2):
l1 = d1.shape[0]
l2 = d2.shape[0]
samples = l1+l2
features = d1.shape[1]
data_pair = np.zeros((samples,features))
data_labels = np.zeros((samples))
data_pair[:l1,:] = d1
data_pair[l1:,:] = d2
data_labels[:l1] = -1
data_labels[l1:] = 1
return data_pair,data_labels
###Output
_____no_output_____
###Markdown
Training nc2 SVM classifiers
###Code
svm = MySVM()
xp, yp = pair_data(data[0], data[1])
w,b,loss = svm.fit(xp,yp,learning_rate=0.0001,maxItr=1000)
plt.plot(loss)
def train_svms(X,Y):
svm_classifiers = {}
for i in range(len(classes)):
svm_classifiers[i] = {}
for j in range(i+1,len(classes)):
svm = MySVM()
x,y = pair_data(data[i],data[j])
w,b,loss = svm.fit(x,y,learning_rate=0.00001,maxItr=1000)
svm_classifiers[i][j] = (w,b)
plt.plot(loss)
plt.show()
return svm_classifiers
svm_classifiers = train_svms(images_data,labels)
svm_classifiers
def binaryPredict(X,w,b):
z = np.dot(X,w.T)+b
if z>=0:
return 1
else:
return -1
###Output
_____no_output_____
###Markdown
Creating a predict function for making predictions
###Code
def predict(X):
counts = np.zeros((len(classes)))
for i in range(len(classes)):
for j in range(i+1,len(classes)):
w,b = svm_classifiers[i][j]
z = binaryPredict(X,w,b)
if z==-1:
counts[i]+=1
else:
counts[j]+=1
final_prediction = np.argmax(counts)
return final_prediction
r = predict(images_data[4])
r
labels[4]
###Output
_____no_output_____
###Markdown
Defining an accuracy function
###Code
def accuracy(x,y):
pred = []
count=0
for i in range(x.shape[0]):
prediction = predict(x[i])
pred.append(prediction)
if prediction==y[i]:
count += 1
return count/x.shape[0], pred
acc, ypred = accuracy(images_data, labels)
print(acc)
###Output
0.6101485148514851
|
AI for Medical Prognosis/Week3/C2M3_Assignment.ipynb | ###Markdown
Survival Estimates that Vary with Time
Welcome to the third assignment of Course 2. In this assignment, we'll use Python to build some of the statistical models we learned this past week to analyze survival estimates for a dataset of lymphoma patients. We'll also evaluate these models and interpret their outputs. Along the way, you will be learning about the following:

- Censored Data
- Kaplan-Meier Estimates
- Subgroup Analysis

Outline
- [1. Import Packages](1)
- [2. Load the Dataset](2)
- [3. Censored Data]()
  - [Exercise 1](Ex-1)
- [4. Survival Estimates](4)
  - [Exercise 2](Ex-2)
  - [Exercise 3](Ex-3)
- [5. Subgroup Analysis](5)
  - [5.1 Bonus: Log Rank Test](5-1)

1. Import Packages
We'll first import all the packages that we need for this assignment.

- `lifelines` is an open-source library for data analysis.
- `numpy` is the fundamental package for scientific computing in python.
- `pandas` is what we'll use to manipulate our data.
- `matplotlib` is a plotting library.
###Code
import lifelines
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from util import load_data
from lifelines import KaplanMeierFitter as KM
from lifelines.statistics import logrank_test
###Output
_____no_output_____
###Markdown
2. Load the Dataset Run the next cell to load the lymphoma data set.
###Code
data = load_data()
###Output
_____no_output_____
###Markdown
As always, you first look over your data.
###Code
print("data shape: {}".format(data.shape))
data.head()
###Output
data shape: (80, 3)
###Markdown
The column `Time` states how long the patient lived before they died or were censored. The column `Event` says whether a death was observed or not: `Event` is 1 if the event is observed (i.e. the patient died) and 0 if the data was censored. Censorship here means that the observation has ended without any observed event. For example, suppose each patient is observed in the hospital for at most 100 days. If a patient dies after only 44 days, their record is `Time = 44` and `Event = 1`. If a patient walks out after 100 days and dies 3 days later (103 days total), this event is not observed in our process and the corresponding row has `Time = 100` and `Event = 0`. If a patient survives for 25 years after being admitted, their recorded data are still `Time = 100` and `Event = 0`. A couple of such rows are sketched below.
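To make the encoding concrete, here is a tiny illustrative table (hypothetical patients, not rows from the lymphoma dataset) built from the examples described above:

```python
import pandas as pd

# hypothetical examples of the Time/Event encoding described above
example = pd.DataFrame({
    "Time":  [44, 100, 100],  # days under observation
    "Event": [1, 0, 0],       # 1 = death observed, 0 = censored
})
print(example)
```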
###Code
data.Time.hist();
plt.xlabel("Observation time before death or censorship (days)");
plt.ylabel("Frequency (number of patients)");
# Note that the semicolon at the end of the plotting line
# silences unnecessary textual output - try removing it
# to observe its effect
###Output
_____no_output_____
###Markdown
Exercise 1In the next cell, write a function to compute the fraction ($\in [0, 1]$) of observations which were censored. Hints Summing up the 'Event' column will give you the number of observations where censorship has NOT occurred.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def frac_censored(df):
"""
Return percent of observations which were censored.
Args:
df (dataframe): dataframe which contains column 'Event' which is
1 if an event occurred (death)
0 if the event did not occur (censored)
Returns:
frac_censored (float): fraction of cases which were censored.
"""
    ### START CODE HERE ###
    result = sum(df['Event'] == 0) / df.shape[0]
    ### END CODE HERE ###
return result
print(frac_censored(data))
###Output
0.325
###Markdown
Expected Output:
```CPP
0.325
```

Run the next cell to see the distributions of survival times for censored and uncensored examples.
###Code
df_censored = data[data.Event == 0]
df_uncensored = data[data.Event == 1]
df_censored.Time.hist()
plt.title("Censored")
plt.xlabel("Time (days)")
plt.ylabel("Frequency")
plt.show()
df_uncensored.Time.hist()
plt.title("Uncensored")
plt.xlabel("Time (days)")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____
###Markdown
4. Survival Estimates
We'll now try to estimate the survival function:

$$S(t) = P(T > t)$$

To illustrate the strengths of Kaplan-Meier, we'll start with a naive estimator of the above survival function. To estimate this quantity, we'll divide the number of people who we know lived past time $t$ by the number of people who were not censored before $t$.

Formally, let $i$ = 1, ..., $n$ be the cases, and let $t_i$ be the time when $i$ was censored or an event happened. Let $e_i = 1$ if an event was observed for $i$ and 0 otherwise. Then let $X_t = \{i : t_i > t\}$, and let $M_t = \{i : e_i = 1 \text{ or } t_i > t\}$. The estimator you will compute will be:

$$\hat{S}(t) = \frac{|X_t|}{|M_t|}$$

Exercise 2
Write a function to compute this estimate for arbitrary $t$ in the cell below.
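As a small worked check of the formula, take the 3-row sample dataframe used in the test cases below (times 5, 10, 15 with events 0, 1, 0). At $t = 12$, only the case with time 15 satisfies $t_i > 12$, so $|X_{12}| = 1$; the case with time 10 has an observed event and the case with time 15 survives past 12, so $|M_{12}| = 2$. Hence $\hat{S}(12) = 1/2$, which matches Test Case 2.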
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def naive_estimator(t, df):
"""
Return naive estimate for S(t), the probability
of surviving past time t. Given by number
of cases who survived past time t divided by the
number of cases who weren't censored before time t.
Args:
t (int): query time
df (dataframe): survival data. Has a Time column,
which says how long until that case
experienced an event or was censored,
and an Event column, which is 1 if an event
was observed and 0 otherwise.
Returns:
S_t (float): estimator for survival function evaluated at t.
"""
    S_t = 0.0
    ### START CODE HERE ###
    X_t = sum(df['Time'] > t)
    M_t = sum((df['Time'] > t) | (df['Event'] == 1))
    S_t = X_t / M_t
    ### END CODE HERE ###
return S_t
print("Test Cases")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.Time = [5, 10, 15]
sample_df.Event = [0, 1, 0]
print("Sample dataframe for testing code:")
print(sample_df)
print("\n")
print("Test Case 1: S(3)")
print("Output: {}, Expected: {}\n".format(naive_estimator(3, sample_df), 1.0))
print("Test Case 2: S(12)")
print("Output: {}, Expected: {}\n".format(naive_estimator(12, sample_df), 0.5))
print("Test Case 3: S(20)")
print("Output: {}, Expected: {}\n".format(naive_estimator(20, sample_df), 0.0))
# Test case 4
sample_df = pd.DataFrame({'Time': [5,5,10],
'Event': [0,1,0]
})
print("Test case 4: S(5)")
print(f"Output: {naive_estimator(5, sample_df)}, Expected: 0.5")
###Output
Test Cases
Sample dataframe for testing code:
Time Event
0 5 0
1 10 1
2 15 0
Test Case 1: S(3)
Output: 1.0, Expected: 1.0
Test Case 2: S(12)
Output: 0.5, Expected: 0.5
Test Case 3: S(20)
Output: 0.0, Expected: 0.0
Test case 4: S(5)
Output: 0.5, Expected: 0.5
###Markdown
In the next cell, we will plot the naive estimator using the real data up to the maximum time in the dataset.
###Code
max_time = data.Time.max()
x = range(0, max_time+1)
y = np.zeros(len(x))
for i, t in enumerate(x):
y[i] = naive_estimator(t, data)
plt.plot(x, y)
plt.title("Naive Survival Estimate")
plt.xlabel("Time")
plt.ylabel("Estimated cumulative survival rate")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 3
Next let's compare this with the Kaplan-Meier estimate. In the cell below, write a function that computes the Kaplan-Meier estimate of $S(t)$ at every distinct time in the dataset. Recall the Kaplan-Meier estimate:

$$S(t) = \prod_{t_i \leq t} (1 - \frac{d_i}{n_i})$$

where $t_i$ are the event times observed in the dataset, $d_i$ is the number of deaths at time $t_i$, and $n_i$ is the number of people who we know have survived up to time $t_i$.

Hints:
* Try sorting by Time.
* Use `pandas.Series.unique`.
* If you get a division by zero error, please double-check how you calculated `n_t`.
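As a quick worked example of this formula, take the second test case below (times 2, 15, 12, 10, 20 with events 0, 0, 1, 1, 1). At $t = 2$ nobody dies, so $S(2) = 1$. At $t = 10$ there are 4 cases still at risk and 1 death, so $S(10) = 1 \cdot (1 - 1/4) = 0.75$. At $t = 12$ there are 3 at risk and 1 death, so $S(12) = 0.75 \cdot (1 - 1/3) = 0.5$. The censored time 15 leaves $S$ unchanged, and at $t = 20$ the last remaining case dies, so $S(20) = 0$. This matches the expected output of Test Case 2.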
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def HomemadeKM(df):
"""
Return KM estimate evaluated at every distinct
time (event or censored) recorded in the dataset.
Event times and probabilities should begin with
time 0 and probability 1.
Example:
input:
Time Censor
0 5 0
1 10 1
2 15 0
correct output:
event_times: [0, 5, 10, 15]
S: [1.0, 1.0, 0.5, 0.5]
Args:
df (dataframe): dataframe which has columns for Time
and Event, defined as usual.
Returns:
event_times (list of ints): array of unique event times
(begins with 0).
S (list of floats): array of survival probabilites, so that
S[i] = P(T > event_times[i]). This
begins with 1.0 (since no one dies at time
0).
"""
# individuals are considered to have survival probability 1
# at time 0
event_times = [0]
p = 1.0
S = [p]
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# get collection of unique observed event times
observed_event_times = df.Time.unique()
# sort event times
observed_event_times = sorted(observed_event_times)
# iterate through event times
for t in observed_event_times:
# compute n_t, number of people who survive to time t
n_t = len(df[df.Time >= t])
# compute d_t, number of people who die at time t
d_t = len(df[(df.Time == t) & (df.Event == 1)])
# update p
p = p*(1 - (float(d_t)/float(n_t)))
# update S and event_times (ADD code below)
# hint: use append
event_times.append(t)
S.append(p)
### END CODE HERE ###
return event_times, S
print("TEST CASES:\n")
print("Test Case 1\n")
print("Test DataFrame:")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.Time = [5, 10, 15]
sample_df.Event = [0, 1, 0]
print(sample_df.head())
print("\nOutput:")
x, y = HomemadeKM(sample_df)
print("Event times: {}, Survival Probabilities: {}".format(x, y))
print("\nExpected:")
print("Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]")
print("\nTest Case 2\n")
print("Test DataFrame:")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.loc[:, "Time"] = [2, 15, 12, 10, 20]
sample_df.loc[:, "Event"] = [0, 0, 1, 1, 1]
print(sample_df.head())
print("\nOutput:")
x, y = HomemadeKM(sample_df)
print("Event times: {}, Survival Probabilities: {}".format(x, y))
print("\nExpected:")
print("Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]")
###Output
TEST CASES:
Test Case 1
Test DataFrame:
Time Event
0 5 0
1 10 1
2 15 0
Output:
Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]
Expected:
Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]
Test Case 2
Test DataFrame:
Time Event
0 2 0
1 15 0
2 12 1
3 10 1
4 20 1
Output:
Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]
Expected:
Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]
###Markdown
Now let's plot the two against each other on the data to see the difference.
###Code
max_time = data.Time.max()
x = range(0, max_time+1)
y = np.zeros(len(x))
for i, t in enumerate(x):
y[i] = naive_estimator(t, data)
plt.plot(x, y, label="Naive")
x, y = HomemadeKM(data)
plt.step(x, y, label="Kaplan-Meier")
plt.xlabel("Time")
plt.ylabel("Survival probability estimate")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Question
What differences do you observe between the naive estimator and the Kaplan-Meier estimator? Do any of our earlier explorations of the dataset help to explain these differences?

5. Subgroup Analysis
We see that along with Time and Censor, we have a column called `Stage_group`.

- A value of 1 in this column denotes a patient with stage III cancer.
- A value of 2 denotes stage IV.

We want to compare the survival functions of these two groups. This time we'll use the `KaplanMeierFitter` class from `lifelines`. Run the next cell to fit and plot the Kaplan-Meier curves for each group.
###Code
S1 = data[data.Stage_group == 1]
km1 = KM()
km1.fit(S1.loc[:, 'Time'], event_observed = S1.loc[:, 'Event'], label = 'Stage III')
S2 = data[data.Stage_group == 2]
km2 = KM()
km2.fit(S2.loc[:, "Time"], event_observed = S2.loc[:, 'Event'], label = 'Stage IV')
ax = km1.plot(ci_show=False)
km2.plot(ax = ax, ci_show=False)
plt.xlabel('time')
plt.ylabel('Survival probability estimate')
plt.savefig('two_km_curves', dpi=300)
###Output
_____no_output_____
###Markdown
Let's compare the survival functions at 90, 180, 270, and 360 days
###Code
survivals = pd.DataFrame([90, 180, 270, 360], columns = ['time'])
survivals.loc[:, 'Group 1'] = km1.survival_function_at_times(survivals['time']).values
survivals.loc[:, 'Group 2'] = km2.survival_function_at_times(survivals['time']).values
survivals
###Output
_____no_output_____
###Markdown
This makes clear the difference in survival between the Stage III and IV cancer groups in the dataset. 5.1 Bonus: Log-Rank TestTo say whether there is a statistical difference between the survival curves we can run the log-rank test. This test tells us the probability that we could observe this data if the two curves were the same. The derivation of the log-rank test is somewhat complicated, but luckily `lifelines` has a simple function to compute it. Run the next cell to compute a p-value using `lifelines.statistics.logrank_test`.
###Code
def logrank_p_value(group_1_data, group_2_data):
result = logrank_test(group_1_data.Time, group_2_data.Time,
group_1_data.Event, group_2_data.Event)
return result.p_value
logrank_p_value(S1, S2)
###Output
_____no_output_____
###Markdown
Survival Estimates that Vary with Time
Welcome to the third assignment of Course 2. In this assignment, we'll use Python to build some of the statistical models we learned this past week to analyze survival estimates for a dataset of lymphoma patients. We'll also evaluate these models and interpret their outputs. Along the way, you will be learning about the following:

- Censored Data
- Kaplan-Meier Estimates
- Subgroup Analysis

Outline
- [1. Import Packages](1)
- [2. Load the Dataset](2)
- [3. Censored Data]()
  - [Exercise 1](Ex-1)
- [4. Survival Estimates](4)
  - [Exercise 2](Ex-2)
  - [Exercise 3](Ex-3)
- [5. Subgroup Analysis](5)
  - [5.1 Bonus: Log Rank Test](5-1)

1. Import Packages
We'll first import all the packages that we need for this assignment.

- `lifelines` is an open-source library for data analysis.
- `numpy` is the fundamental package for scientific computing in python.
- `pandas` is what we'll use to manipulate our data.
- `matplotlib` is a plotting library.
###Code
import lifelines
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from util import load_data
from lifelines import KaplanMeierFitter as KM
from lifelines.statistics import logrank_test
###Output
_____no_output_____
###Markdown
2. Load the Dataset Run the next cell to load the lymphoma data set.
###Code
data = load_data()
###Output
_____no_output_____
###Markdown
As always, you first look over your data.
###Code
print("data shape: {}".format(data.shape))
data.head()
###Output
data shape: (80, 3)
###Markdown
The column `Time` states how long the patient lived before they died or were censored. The column `Event` says whether a death was observed or not: `Event` is 1 if the event is observed (i.e. the patient died) and 0 if the data was censored. Censorship here means that the observation has ended without any observed event. For example, suppose each patient is observed in the hospital for at most 100 days. If a patient dies after only 44 days, their record is `Time = 44` and `Event = 1`. If a patient walks out after 100 days and dies 3 days later (103 days total), this event is not observed in our process and the corresponding row has `Time = 100` and `Event = 0`. If a patient survives for 25 years after being admitted, their recorded data are still `Time = 100` and `Event = 0`.
###Code
data.Time.hist();
plt.xlabel("Observation time before death or censorship (days)");
plt.ylabel("Frequency (number of patients)");
# Note that the semicolon at the end of the plotting line
# silences unnecessary textual output - try removing it
# to observe its effect
###Output
_____no_output_____
###Markdown
Exercise 1In the next cell, write a function to compute the fraction ($\in [0, 1]$) of observations which were censored. Hints Summing up the 'Event' column will give you the number of observations where censorship has NOT occurred.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def frac_censored(df):
"""
Return percent of observations which were censored.
Args:
df (dataframe): dataframe which contains column 'Event' which is
1 if an event occurred (death)
0 if the event did not occur (censored)
Returns:
frac_censored (float): fraction of cases which were censored.
"""
result = 0.0
### START CODE HERE ###
result = 1 - df["Event"].sum(axis=0) / df.shape[0]
### END CODE HERE ###
return result
print(frac_censored(data))
###Output
0.32499999999999996
###Markdown
Expected Output:
```CPP
0.325
```

Run the next cell to see the distributions of survival times for censored and uncensored examples.
###Code
df_censored = data[data.Event == 0]
df_uncensored = data[data.Event == 1]
df_censored.Time.hist()
plt.title("Censored")
plt.xlabel("Time (days)")
plt.ylabel("Frequency")
plt.show()
df_uncensored.Time.hist()
plt.title("Uncensored")
plt.xlabel("Time (days)")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____
###Markdown
4. Survival Estimates
We'll now try to estimate the survival function:

$$S(t) = P(T > t)$$

To illustrate the strengths of Kaplan-Meier, we'll start with a naive estimator of the above survival function. To estimate this quantity, we'll divide the number of people who we know lived past time $t$ by the number of people who were not censored before $t$.

Formally, let $i$ = 1, ..., $n$ be the cases, and let $t_i$ be the time when $i$ was censored or an event happened. Let $e_i = 1$ if an event was observed for $i$ and 0 otherwise. Then let $X_t = \{i : t_i > t\}$, and let $M_t = \{i : e_i = 1 \text{ or } t_i > t\}$. The estimator you will compute will be:

$$\hat{S}(t) = \frac{|X_t|}{|M_t|}$$

Exercise 2
Write a function to compute this estimate for arbitrary $t$ in the cell below.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def naive_estimator(t, df):
"""
Return naive estimate for S(t), the probability
of surviving past time t. Given by number
of cases who survived past time t divided by the
number of cases who weren't censored before time t.
Args:
t (int): query time
df (dataframe): survival data. Has a Time column,
which says how long until that case
experienced an event or was censored,
and an Event column, which is 1 if an event
was observed and 0 otherwise.
Returns:
S_t (float): estimator for survival function evaluated at t.
"""
S_t = 0.0
### START CODE HERE ###
S_t = df[df["Time"] > t].shape[0] / df[(df["Event"] == 1) | (df["Time"] > t )].shape[0]
### END CODE HERE ###
return S_t
print("Test Cases")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.Time = [5, 10, 15]
sample_df.Event = [0, 1, 0]
print("Sample dataframe for testing code:")
print(sample_df)
print("\n")
print("Test Case 1: S(3)")
print("Output: {}, Expected: {}\n".format(naive_estimator(3, sample_df), 1.0))
print("Test Case 2: S(12)")
print("Output: {}, Expected: {}\n".format(naive_estimator(12, sample_df), 0.5))
print("Test Case 3: S(20)")
print("Output: {}, Expected: {}\n".format(naive_estimator(20, sample_df), 0.0))
# Test case 4
sample_df = pd.DataFrame({'Time': [5,5,10],
'Event': [0,1,0]
})
print("Test case 4: S(5)")
print(f"Output: {naive_estimator(5, sample_df)}, Expected: 0.5")
###Output
Test Cases
Sample dataframe for testing code:
Time Event
0 5 0
1 10 1
2 15 0
Test Case 1: S(3)
Output: 1.0, Expected: 1.0
Test Case 2: S(12)
Output: 0.5, Expected: 0.5
Test Case 3: S(20)
Output: 0.0, Expected: 0.0
Test case 4: S(5)
Output: 0.5, Expected: 0.5
###Markdown
In the next cell, we will plot the naive estimator using the real data up to the maximum time in the dataset.
###Code
max_time = data.Time.max()
x = range(0, max_time+1)
y = np.zeros(len(x))
for i, t in enumerate(x):
y[i] = naive_estimator(t, data)
plt.plot(x, y)
plt.title("Naive Survival Estimate")
plt.xlabel("Time")
plt.ylabel("Estimated cumulative survival rate")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 3
Next let's compare this with the Kaplan-Meier estimate. In the cell below, write a function that computes the Kaplan-Meier estimate of $S(t)$ at every distinct time in the dataset. Recall the Kaplan-Meier estimate:

$$S(t) = \prod_{t_i \leq t} (1 - \frac{d_i}{n_i})$$

where $t_i$ are the event times observed in the dataset, $d_i$ is the number of deaths at time $t_i$, and $n_i$ is the number of people who we know have survived up to time $t_i$.

Hints:
* Try sorting by Time.
* Use `pandas.Series.unique`.
* If you get a division by zero error, please double-check how you calculated `n_t`.
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def HomemadeKM(df):
"""
Return KM estimate evaluated at every distinct
time (event or censored) recorded in the dataset.
Event times and probabilities should begin with
time 0 and probability 1.
Example:
input:
Time Censor
0 5 0
1 10 1
2 15 0
correct output:
event_times: [0, 5, 10, 15]
S: [1.0, 1.0, 0.5, 0.5]
Args:
df (dataframe): dataframe which has columns for Time
and Event, defined as usual.
Returns:
event_times (list of ints): array of unique event times
(begins with 0).
S (list of floats): array of survival probabilites, so that
S[i] = P(T > event_times[i]). This
begins with 1.0 (since no one dies at time
0).
"""
# individuals are considered to have survival probability 1
# at time 0
event_times = [0]
p = 1.0
S = [p]
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# get collection of unique observed event times
observed_event_times = df["Time"].unique().tolist()
# sort event times
observed_event_times = sorted(observed_event_times)
# iterate through event times
for t in observed_event_times:
# compute n_t, number of people who survive to time t
n_t = df[df["Time"] >= t].shape[0]
# compute d_t, number of people who die at time t
d_t = df[(df["Time"] == t) & (df["Event"] == 1)].shape[0]
# update p
p = p * (1 - d_t / n_t)
# update S and event_times (ADD code below)
# hint: use append
S.append(p)
event_times.append(t)
### END CODE HERE ###
return event_times, S
print("TEST CASES:\n")
print("Test Case 1\n")
print("Test DataFrame:")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.Time = [5, 10, 15]
sample_df.Event = [0, 1, 0]
print(sample_df.head())
print("\nOutput:")
x, y = HomemadeKM(sample_df)
print("Event times: {}, Survival Probabilities: {}".format(x, y))
print("\nExpected:")
print("Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]")
print("\nTest Case 2\n")
print("Test DataFrame:")
sample_df = pd.DataFrame(columns = ["Time", "Event"])
sample_df.loc[:, "Time"] = [2, 15, 12, 10, 20]
sample_df.loc[:, "Event"] = [0, 0, 1, 1, 1]
print(sample_df.head())
print("\nOutput:")
x, y = HomemadeKM(sample_df)
print("Event times: {}, Survival Probabilities: {}".format(x, y))
print("\nExpected:")
print("Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]")
###Output
TEST CASES:
Test Case 1
Test DataFrame:
Time Event
0 5 0
1 10 1
2 15 0
Output:
Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]
Expected:
Event times: [0, 5, 10, 15], Survival Probabilities: [1.0, 1.0, 0.5, 0.5]
Test Case 2
Test DataFrame:
Time Event
0 2 0
1 15 0
2 12 1
3 10 1
4 20 1
Output:
Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]
Expected:
Event times: [0, 2, 10, 12, 15, 20], Survival Probabilities: [1.0, 1.0, 0.75, 0.5, 0.5, 0.0]
###Markdown
Now let's plot the two against each other on the data to see the difference.
###Code
max_time = data.Time.max()
x = range(0, max_time+1)
y = np.zeros(len(x))
for i, t in enumerate(x):
y[i] = naive_estimator(t, data)
plt.plot(x, y, label="Naive")
x, y = HomemadeKM(data)
plt.step(x, y, label="Kaplan-Meier")
plt.xlabel("Time")
plt.ylabel("Survival probability estimate")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Question
What differences do you observe between the naive estimator and Kaplan-Meier estimator? Do any of our earlier explorations of the dataset help to explain these differences?
5. Subgroup Analysis
We see that along with Time and Censor, we have a column called `Stage_group`.
- A value of 1 in this column denotes a patient with stage III cancer
- A value of 2 denotes stage IV
We want to compare the survival functions of these two groups. This time we'll use the `KaplanMeierFitter` class from `lifelines`. Run the next cell to fit and plot the Kaplan Meier curves for each group.
###Code
S1 = data[data.Stage_group == 1]
km1 = KM()
km1.fit(S1.loc[:, 'Time'], event_observed = S1.loc[:, 'Event'], label = 'Stage III')
S2 = data[data.Stage_group == 2]
km2 = KM()
km2.fit(S2.loc[:, "Time"], event_observed = S2.loc[:, 'Event'], label = 'Stage IV')
ax = km1.plot(ci_show=False)
km2.plot(ax = ax, ci_show=False)
plt.xlabel('time')
plt.ylabel('Survival probability estimate')
plt.savefig('two_km_curves', dpi=300)
###Output
_____no_output_____
###Markdown
Let's compare the survival functions at 90, 180, 270, and 360 days
###Code
survivals = pd.DataFrame([90, 180, 270, 360], columns = ['time'])
survivals.loc[:, 'Group 1'] = km1.survival_function_at_times(survivals['time']).values
survivals.loc[:, 'Group 2'] = km2.survival_function_at_times(survivals['time']).values
survivals
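# Hedged addition (not part of the original exercise): the absolute gap between the two
# groups at each queried time, which makes the comparison discussed below easier to read.
survivals.loc[:, 'Difference'] = survivals['Group 1'] - survivals['Group 2']
survivals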
###Output
_____no_output_____
###Markdown
This makes clear the difference in survival between the Stage III and IV cancer groups in the dataset.
5.1 Bonus: Log-Rank Test
To say whether there is a statistical difference between the survival curves we can run the log-rank test. This test tells us the probability that we could observe this data if the two curves were the same. The derivation of the log-rank test is somewhat complicated, but luckily `lifelines` has a simple function to compute it. Run the next cell to compute a p-value using `lifelines.statistics.logrank_test`.
###Code
def logrank_p_value(group_1_data, group_2_data):
result = logrank_test(group_1_data.Time, group_2_data.Time,
group_1_data.Event, group_2_data.Event)
return result.p_value
logrank_p_value(S1, S2)
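# Hedged usage sketch: interpret the log-rank p-value as evidence against the two curves
# being the same. The 0.05 cutoff is a conventional choice, not part of the assignment.
p = logrank_p_value(S1, S2)
print("Likely different survival curves" if p < 0.05 else "No significant difference detected")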
###Output
_____no_output_____ |
notebooks/MODIS Smoke Classifier.ipynb | ###Markdown
Objectives
In this notebook various approaches for classifying smoke in multispectral MODIS images are investigated. This evaluation of the classification is being performed on a manually labelled dataset taken from MODIS observations over North and South America during the fire seasons of 2014. In each image pixel samples were taken from smoke and smoke-free areas, with both being labelled separately. Using this labelled data we hope to generate a suitable classifier.
The classifier will be applied in the generation of smoke plume masks, which in turn will be used to find coincidences between MODIS pixels and either AERONET or CALIOP observations of smoke. Using these collocated data we can then perform an evaluation of the ORAC AOD retrieval LUTs and determine which is most appropriate and provide an indication of how well it performs. Furthermore, we can potentially use the collocated AERONET observations to provide an improved LUT for smoke, which would be ideal.
The initial classifier that will be tested is the random forest approach (an ensemble of decision trees). This approach has a number of benefits, one of the main ones being that it does relatively well out of the box and there are not many hyperparameters to tune in order to get a good fit. It is also rather fast to train and apply (check this!).
###Code
# add the working code to path
import sys
sys.path.append("/Users/dnf/Projects/kcl-fire-aot/src")
import os
import pickle
import pandas as pd
import numpy as np
from pyhdf.SD import SD, SDC
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy import ndimage
import config.filepaths as filepaths
import GLCM.Textures as textures
import config.sensor as sensor
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Setup
First let's read in the dataframe containing the labelled channels, and then pare these channels down to an initial set that likely contains the most useful information. PCA was attempted and led to a significant degradation in the quality of the results. Also, the trained random forest is fast to apply to all channels, faster even than the PCA compute time, so there is no point in playing around to try and make it work.
###Code
# set region based on sensor MAKE SURE TO UPDATE SENSOR IN THE CODE (i.e. sensor.sensor file variable)
if sensor.sensor == 'goes':
region = 'Americas'
elif sensor.sensor == 'himawari':
region = 'Asia'
# the filepath on this may need hardcoding as what if we change sensor in this notebook, but have not changed
# sensor in the code!
df = pd.read_pickle('/Users/dnf/Projects/kcl-fire-aot/data/Americas/interim/classification_features.pickle')
df.head()
channels = [1,2,3,4,5,7,20,22,23,31,32,33,34,35,
'glcm_correlation', 'glcm_dissimilarity', 'glcm_variance']
X = df[channels]
y = df["smoke_flag"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
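# Hedged sketch (illustrative only, not the original experiment): the PCA route mentioned
# above. The number of components is an assumption; in practice this degraded the results,
# so the raw channels and GLCM textures are used directly in the rest of the notebook.
X_train_pca = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(X_train))
print(X_train_pca.shape)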
###Output
_____no_output_____
###Markdown
Model setup
###Code
# Initialize our model with 500 trees
rf = RandomForestClassifier(n_estimators=500, oob_score=True, n_jobs=3)
# Fit our model to training data
rf = rf.fit(X_train, y_train)
pickle.dump(rf, open('/Users/dnf/Projects/kcl-fire-aot/data/{0}/models/rf_model_500_trees.pickle'.format(region), 'wb'))
rf = pickle.load(open('/Users/dnf/Projects/kcl-fire-aot/data/Americas/models/rf_model_32_trees.pickle', 'rb'))
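# Hedged sketch (the settings are illustrative assumptions, not the original study): a quick
# comparison of forest size and depth, which motivates the observations on tree count vs.
# max depth noted later in this notebook.
for n_trees, depth in [(32, None), (500, None), (500, 10)]:
    candidate = RandomForestClassifier(n_estimators=n_trees, max_depth=depth, n_jobs=3)
    candidate.fit(X_train, y_train)
    print(n_trees, depth, metrics.accuracy_score(y_test, candidate.predict(X_test)))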
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
print('Our OOB prediction of accuracy is: {oob}%'.format(oob=rf.oob_score_ * 100))
for c, imp in zip(channels, rf.feature_importances_):
print('Band {c} importance: {imp}'.format(c=c, imp=imp))
# Setup a dataframe -- just like R n_est = 32, max_d = None
df = pd.DataFrame()
df['truth'] = y_test
df['predict'] = rf.predict(X_test)
# Cross-tabulate predictions
print(pd.crosstab(df['truth'], df['predict'], margins=True))
# testing score
score = metrics.f1_score(y_test, rf.predict(X_test))
# training score
score_train = metrics.f1_score(y_train, rf.predict(X_train))
print score, score_train
pscore = metrics.accuracy_score(y_test, rf.predict(X_test))
pscore_train = metrics.accuracy_score(y_train, rf.predict(X_train))
print pscore, pscore_train
###Output
0.999528931177 0.999521738857
###Markdown
Some key outcomes: The fewer the trees, the quicker the application. There is not much difference between having 32 and 200+ trees, perhaps at most a few percent, which is not important for our application; speed is. Hence, 32 trees are preferred over larger numbers so that we can process the images in a more timely fashion. The key parameter is the max depth of the trees; limiting the max depth really affects the classification accuracy on the test data. So, it is best to let the random forest figure this out itself and not mess around with it too much.
Image test
Let's test it on some MODIS scenes and see what outcomes we get.
###Code
def generate_textures(mod_chan_data, i):
image = mod_chan_data[i, :, :]
texture_generator = textures.CooccurenceMatrixTextures(image)
measures = []
names = ['glcm_dissimilarity', 'glcm_correlation', 'glcm_variance', 'glcm_mean']
diss = texture_generator.getDissimlarity()
print 'dis shape', diss.shape
corr, var, mean = texture_generator.getCorrVarMean()
for measure in [diss, corr, var, mean]:
measures.append(measure.flatten())
return measures, names
mod_path = '/Users/dnf/Projects/kcl-fire-aot/data/Americas/raw/modis/l1b/'
#mod_file = 'MYD021KM.A2014126.1855.006.2014127191958.hdf'
mod_file = 'MYD021KM.A2014217.2020.006.2014218152754.hdf'
#mod_file = 'MYD021KM.A2014231.1655.006.2014232153729.hdf'
#mod_file = 'MYD021KM.A2014236.1710.006.2014237152811.hdf'
#mod_file = 'MYD021KM.A2014252.1710.006.2014253145416.hdf'
#mod_file = 'MYD021KM.A2014257.2105.006.2014268131515.hdf'
path_mod_data = os.path.join(mod_path, mod_file)
modis_data = SD(path_mod_data, SDC.READ)
holding_dict = dict()
for chan_band_name, chan_data_name in zip(['Band_250M', 'Band_500M', 'Band_1KM_Emissive'],
['EV_250_Aggr1km_RefSB', 'EV_500_Aggr1km_RefSB', 'EV_1KM_Emissive']):
mod_chan_band = modis_data.select(chan_band_name).get()
mod_chan_data = modis_data.select(chan_data_name).get()
for i, band in enumerate(mod_chan_band):
if band == 3:
im_for_show = mod_chan_data[i, :, :]
print 'im shape', im_for_show.shape
# let generate GLCM texture measures for MODIS band 8
texture_measure, keys = generate_textures(mod_chan_data, i)
for i, k in enumerate(keys):
if k in holding_dict:
holding_dict[k].extend(list(texture_measure[i]))
else:
holding_dict[k] = list(texture_measure[i])
# check to see if we are working with a plume subset or an entire image
data_for_band = mod_chan_data[i, :, :]
data_for_band = data_for_band.flatten()
if band in holding_dict:
holding_dict[band].extend(list(data_for_band))
else:
holding_dict[band] = list(data_for_band)
test_df = pd.DataFrame.from_dict(holding_dict)
test_df = test_df[channels]
test_df.shape
smoke_mask = rf.predict(test_df)
smoke_mask = smoke_mask.reshape((2030, 1354))
###Output
_____no_output_____
###Markdown
Last thing - do an erosion dilation to get rid of the single pixel noise in the scene.
###Code
smoke_mask = ndimage.binary_erosion(smoke_mask)
smoke_mask = ndimage.binary_dilation(smoke_mask)
fig = plt.figure(figsize=(25,12))
plt.imshow(smoke_mask, cmap='gray', interpolation='none')
plt.savefig('smoke_mask.png', bbox_inches='tight')
plt.imshow(im_for_show, cmap='gray', interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
References Image classification with Random Forests: http://ceholden.github.io/open-geo-tutorial/python/chapter_5_classification.htmlClassification Score on random forests: https://stats.stackexchange.com/questions/125756/classification-score-for-random-forestParameter tuning in random forests: https://stackoverflow.com/questions/36107820/how-to-tune-parameters-in-random-forest-using-scikit-learnSplitting datasets for cross validation: https://stats.stackexchange.com/questions/95797/how-to-split-the-dataset-for-cross-validation-learning-curve-and-final-evaluat
###Code
###Output
_____no_output_____ |
Pandas to Spark.ipynb | ###Markdown
前言本文主要讨论如何把pandas移植到spark, 他们的dataframe共有一些特性如操作方法和模式。pandas的灵活性比spark强, 但是经过一些改动spark基本上能完成相同的工作。同时又兼具了扩展性的优势,当然他们的语法和用法稍稍有些不同。 主要不同处: 分布式处理pandas只能单机处理, 把dataframe放进内存计算。spark是集群分布式地,可以处理的数据可以大大超出集群的内存数。 懒执行spark不执行任何`transformation`直到需要运行`action`方法,`action`一般是存储或者展示数据的操作。这种将`transformation`延后的做法可以让spark调度知道所有的执行情况,用于优化执行顺序和读取需要的数据。懒执行也是scala的特性之一。通常,在pandas我们总是和数据打交道, 而在spark,我们总是在改变产生数据的执行计划。 数据不可变scala的函数式编程通常倾向使用不可变对象, 每一个spark transformation会返回一个新的dataframe(除了一些meta info会改变) 没有索引spark是没有索引概念的. 单条数据索引不方便pandas可以快速使用索引找到数据,spark没有这个功能,因为在spark主要操作的是执行计划来展示数据, 而不是数据本身。 spark sql因为有了SQL功能的支持, spark更接近关系型数据库。 pandas和pyspark使用的一些例子
###Code
import pandas as pd
import pyspark.sql
import pyspark.sql.functions as sf
from pyspark.sql import SparkSession
###Output
_____no_output_____
###Markdown
Projections
In pandas a projection can be done directly with the [] operator.
###Code
person_pd = pd.read_csv('data/persons.csv')
person_pd[["name", "sex", "age"]]
###Output
_____no_output_____
###Markdown
PySpark can also use `[]` directly to select a projection, but this is syntactic sugar; it actually uses the `select` method.
###Code
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","6G") \
.getOrCreate()
#person_pd[['age','name']]
person_sp = spark.read.option("inferSchema", True) \
.option("header", True) \
.csv('data/persons.csv')
person_sp.show()
person_sp[['age', 'name']].show()
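# Hedged illustration of the lazy execution described in the introduction: a transformation
# only builds an execution plan, while an action such as count() actually runs it.
lazy_plan = person_sp.filter(person_sp['age'] > 20)  # transformation: nothing is executed yet
print(lazy_plan.count())                             # action: the plan is executed here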
###Output
+---+-------+
|age| name|
+---+-------+
| 23| Alice|
| 21| Bob|
| 27|Charlie|
| 24| Eve|
| 19|Frances|
| 31| George|
+---+-------+
###Markdown
Simple transformations
Spark's `dataframe.select` actually accepts any column object; conceptually a column object is one column of the dataframe. A column can be an input column of the dataframe, a computed result, or the result of a transformation of several columns. As an example, convert a column to upper case:
###Code
ret = pd.DataFrame(person_pd['name'].apply(lambda x: x.upper()))
ret
result = person_sp.select(
sf.upper(person_sp.name)
)
result.show()
###Output
+-----------+
|upper(name)|
+-----------+
| ALICE|
| BOB|
| CHARLIE|
| EVE|
| FRANCES|
| GEORGE|
+-----------+
###Markdown
Adding a column to a dataframe
Adding a column in pandas is easy: just assign it on the df. Spark needs the `withColumn` function.
###Code
def create_salutation(row):
sex = row[0]
name = row[1]
if sex == 'male':
return 'Mr '+name
else:
return "Mrs "+name
result = person_pd.copy()
result['salutation'] = result[['sex','name']].apply(create_salutation, axis=1, result_type='expand')
result
result = person_sp.withColumn(
"salutation",
sf.concat(sf.when(person_sp.sex == 'male', "Mr ").otherwise("Mrs "), person_sp.name)
)
result.show()
###Output
+---+------+-------+------+-----------+
|age|height| name| sex| salutation|
+---+------+-------+------+-----------+
| 23| 156| Alice|female| Mrs Alice|
| 21| 181| Bob| male| Mr Bob|
| 27| 176|Charlie| male| Mr Charlie|
| 24| 167| Eve|female| Mrs Eve|
| 19| 172|Frances|female|Mrs Frances|
| 31| 191| George| male| Mr George|
+---+------+-------+------+-----------+
###Markdown
Filtering
###Code
result = person_pd[person_pd['age'] > 20]
result
###Output
_____no_output_____
###Markdown
Spark supports three ways of writing a filter
###Code
person_sp.filter(person_sp['age'] > 20).show()
person_sp[person_sp['age'] > 20].show()
person_sp.filter('age > 20').show()
###Output
+---+------+-------+------+
|age|height| name| sex|
+---+------+-------+------+
| 23| 156| Alice|female|
| 21| 181| Bob| male|
| 27| 176|Charlie| male|
| 24| 167| Eve|female|
| 31| 191| George| male|
+---+------+-------+------+
###Markdown
Grouping and aggregation
Similar to SQL's `select <aggregation> ... group by <grouping>`, both pandas and Spark define a number of aggregate functions, such as:
- count
- sum
- avg
- corr
- first
- last
See the [PySpark Function Documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlmodule-pyspark.sql.functions) for details.
###Code
result = person_pd.groupby('sex').agg({'age': 'mean', 'height':['min', 'max']})
result
from pyspark.sql.functions import avg, min, max
result = person_sp.groupBy(person_sp.sex).agg(min(person_sp.height).alias('min height'), max(person_sp.height).alias('max height'),
avg(person_sp.age))
result.show()
person_sp.show()
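# Hedged alternative: Spark's agg() also accepts a dict of column -> aggregate name,
# which mirrors the pandas dict style used above (column aliases are generated automatically).
person_sp.groupBy('sex').agg({'age': 'avg', 'height': 'max'}).show()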
###Output
+---+------+-------+------+
|age|height| name| sex|
+---+------+-------+------+
| 23| 156| Alice|female|
| 21| 181| Bob| male|
| 27| 176|Charlie| male|
| 24| 167| Eve|female|
| 19| 172|Frances|female|
| 31| 191| George| male|
+---+------+-------+------+
###Markdown
Join
Spark also supports joins across dataframes; let's add some data as an example.
###Code
addresses = spark.read.json('data/addresses.json')
addresses_pd = addresses.toPandas()
addresses_pd
pd_join = person_pd.merge(addresses_pd, left_on=['name'], right_on=['name'])
pd_join
sp_join = person_sp.join(addresses, person_sp.name==addresses.name)
sp_join.show()
sp_join_1 = person_sp.join(addresses, on=['name'])
sp_join_1.show()
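# Hedged extension: join() also takes a "how" argument (e.g. 'left', 'outer'), so people
# without an address can be kept as well; 'left' here is an illustrative choice.
person_sp.join(addresses, on=['name'], how='left').show()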
###Output
+---+------+-----+------+---------+-----+
|age|height| name| sex| city| name|
+---+------+-----+------+---------+-----+
| 23| 156|Alice|female| Hamburg|Alice|
| 21| 181| Bob| male|Frankfurt| Bob|
+---+------+-----+------+---------+-----+
+-----+---+------+------+---------+
| name|age|height| sex| city|
+-----+---+------+------+---------+
|Alice| 23| 156|female| Hamburg|
| Bob| 21| 181| male|Frankfurt|
+-----+---+------+------+---------+
###Markdown
Reassembling a dataframe
pandas can very conveniently assign an existing column of data to a new column, but this is not as convenient in Spark, which requires a join.
###Code
df = person_pd[['name', 'age']]
col = person_pd['height']
result = df.copy()
result['h2'] = col
result
df = person_sp[['name', 'age']]
col = person_sp[['name', 'height']]
result = df.join(col, on=['name'])
result.show()
###Output
+-------+---+------+
| name|age|height|
+-------+---+------+
| Alice| 23| 156|
| Bob| 21| 181|
|Charlie| 27| 176|
| Eve| 24| 167|
|Frances| 19| 172|
| George| 31| 191|
+-------+---+------+
|
Fair-SMOTE/Adult_Race.ipynb | ###Markdown
Load Dataset
###Code
## Load dataset
from sklearn import preprocessing
dataset_orig = pd.read_csv('../data/adult.data.csv')
## Drop NULL values
dataset_orig = dataset_orig.dropna()
## Drop categorical features
dataset_orig = dataset_orig.drop(['workclass','fnlwgt','education','marital-status','occupation','relationship','native-country'],axis=1)
## Change symbolics to numerics
dataset_orig['sex'] = np.where(dataset_orig['sex'] == ' Male', 1, 0)
dataset_orig['race'] = np.where(dataset_orig['race'] != ' White', 0, 1)
dataset_orig['Probability'] = np.where(dataset_orig['Probability'] == ' <=50K', 0, 1)
## Discretize age
dataset_orig['age'] = np.where(dataset_orig['age'] >= 70, 70, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 60 ) & (dataset_orig['age'] < 70), 60, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 50 ) & (dataset_orig['age'] < 60), 50, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 40 ) & (dataset_orig['age'] < 50), 40, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 30 ) & (dataset_orig['age'] < 40), 30, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 20 ) & (dataset_orig['age'] < 30), 20, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 10 ) & (dataset_orig['age'] < 20), 10, dataset_orig['age'])
dataset_orig['age'] = np.where(dataset_orig['age'] < 10, 0, dataset_orig['age'])
protected_attribute = 'race'
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
dataset_orig = pd.DataFrame(scaler.fit_transform(dataset_orig),columns = dataset_orig.columns)
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2,shuffle = True)
# dataset_orig
###Output
_____no_output_____
###Markdown
Check original scores
###Code
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
_____no_output_____
###Markdown
Check SMOTE Scores
###Code
def apply_smote(df):
df.reset_index(drop=True,inplace=True)
cols = df.columns
smt = smote(df)
df = smt.run()
df.columns = cols
return df
# dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
train_df = X_train
train_df['Probability'] = y_train
train_df = apply_smote(train_df)
y_train = train_df.Probability
X_train = train_df.drop('Probability', axis = 1)
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
_____no_output_____
###Markdown
Find Class & Protected attribute Distribution
###Code
# first one is class value and second one is protected attribute value
zero_zero = len(dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 0)])
zero_one = len(dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 1)])
one_zero = len(dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 0)])
one_one = len(dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 1)])
print(zero_zero,zero_one,one_zero,one_one)
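# Hedged addition: the same four counts as fractions of the training set, which makes the
# class / protected-attribute imbalance easier to read at a glance.
total = zero_zero + zero_one + one_zero + one_one
print([round(x / total, 3) for x in (zero_zero, zero_one, one_zero, one_one)])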
###Output
_____no_output_____
###Markdown
Sort these four
###Code
maximum = max(zero_zero,zero_one,one_zero,one_one)
if maximum == zero_zero:
print("zero_zero is maximum")
if maximum == zero_one:
print("zero_one is maximum")
if maximum == one_zero:
print("one_zero is maximum")
if maximum == one_one:
print("one_one is maximum")
zero_zero_to_be_incresed = maximum - zero_zero ## where both are 0
one_zero_to_be_incresed = maximum - one_zero ## where class is 1 attribute is 0
one_one_to_be_incresed = maximum - one_one ## where class is 1 attribute is 1
print(zero_zero_to_be_incresed,one_zero_to_be_incresed,one_one_to_be_incresed)
"""
df_zero_zero = dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 0)]
df_one_zero = dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 0)]
df_one_one = dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 1)]
df_zero_zero['race'] = df_zero_zero['race'].astype(str)
df_zero_zero['sex'] = df_zero_zero['sex'].astype(str)
df_one_zero['race'] = df_one_zero['race'].astype(str)
df_one_zero['sex'] = df_one_zero['sex'].astype(str)
df_one_one['race'] = df_one_one['race'].astype(str)
df_one_one['sex'] = df_one_one['sex'].astype(str)
df_zero_zero = generate_samples(zero_zero_to_be_incresed,df_zero_zero,'Adult')
df_one_zero = generate_samples(one_zero_to_be_incresed,df_one_zero,'Adult')
df_one_one = generate_samples(one_one_to_be_incresed,df_one_one,'Adult')
"""
#print(dataset_orig_train)
ratio_mapping = {
'zero_zero': 0.40,
'zero_one': 0.10,
'one_zero': 0.35,
'one_one': 0.15
}
temp = rebalance(dataset_orig_train, 'Adult', ['race', 'sex'], protected_attribute, ratio_mapping)
###Output
c:\Users\Administrator\Desktop\Fair-SMOTE-master\DataBalance.py:30: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_map[t][k] = df_map[t][k].astype(str)
###Markdown
Append the dataframes
###Code
"""
df = df_zero_zero.append(df_one_zero)
df = df.append(df_one_one)
df['race'] = df['race'].astype(float)
df['sex'] = df['sex'].astype(float)
df_zero_one = dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 1)]
df = df.append(df_zero_one)
"""
df = temp
###Output
_____no_output_____
###Markdown
Check score after oversampling
###Code
X_train, y_train = df.loc[:, df.columns != 'Probability'], df['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
recall : 0.76
far : 0.23
precision : 0.51
accuracy : 0.77
F1 Score : 0.61
aod :race -0.03
eod :race -0.03
SPD: 0.08
DI: 0.22
###Markdown
Verification
###Code
# first one is class value and second one is protected attribute value
zero_zero = len(df[(df['Probability'] == 0) & (df[protected_attribute] == 0)])
zero_one = len(df[(df['Probability'] == 0) & (df[protected_attribute] == 1)])
one_zero = len(df[(df['Probability'] == 1) & (df[protected_attribute] == 0)])
one_one = len(df[(df['Probability'] == 1) & (df[protected_attribute] == 1)])
print(zero_zero,zero_one,one_zero,one_one)
###Output
9769 9769 9769 9769
###Markdown
Load Dataset
###Code
## Load dataset
from sklearn import preprocessing
dataset_orig = pd.read_csv('../data/adult.data.csv')
## Drop NULL values
dataset_orig = dataset_orig.dropna()
## Drop categorical features
dataset_orig = dataset_orig.drop(['workclass','fnlwgt','education','marital-status','occupation','relationship','native-country'],axis=1)
## Change symbolics to numerics
dataset_orig['sex'] = np.where(dataset_orig['sex'] == ' Male', 1, 0)
dataset_orig['race'] = np.where(dataset_orig['race'] != ' White', 0, 1)
dataset_orig['Probability'] = np.where(dataset_orig['Probability'] == ' <=50K', 0, 1)
## Discretize age
dataset_orig['age'] = np.where(dataset_orig['age'] >= 70, 70, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 60 ) & (dataset_orig['age'] < 70), 60, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 50 ) & (dataset_orig['age'] < 60), 50, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 40 ) & (dataset_orig['age'] < 50), 40, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 30 ) & (dataset_orig['age'] < 40), 30, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 20 ) & (dataset_orig['age'] < 30), 20, dataset_orig['age'])
dataset_orig['age'] = np.where((dataset_orig['age'] >= 10 ) & (dataset_orig['age'] < 20), 10, dataset_orig['age'])
dataset_orig['age'] = np.where(dataset_orig['age'] < 10, 0, dataset_orig['age'])
protected_attribute = 'race'
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
dataset_orig = pd.DataFrame(scaler.fit_transform(dataset_orig),columns = dataset_orig.columns)
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2,shuffle = True)
# dataset_orig
###Output
_____no_output_____
###Markdown
Check original scores
###Code
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
_____no_output_____
###Markdown
Check SMOTE Scores
###Code
def apply_smote(df):
df.reset_index(drop=True,inplace=True)
cols = df.columns
smt = smote(df)
df = smt.run()
df.columns = cols
return df
# dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
train_df = X_train
train_df['Probability'] = y_train
train_df = apply_smote(train_df)
y_train = train_df.Probability
X_train = train_df.drop('Probability', axis = 1)
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
_____no_output_____
###Markdown
Find Class & Protected attribute Distribution
###Code
# first one is class value and second one is protected attribute value
zero_zero = len(dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 0)])
zero_one = len(dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 1)])
one_zero = len(dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 0)])
one_one = len(dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 1)])
print(zero_zero,zero_one,one_zero,one_one)
###Output
_____no_output_____
###Markdown
Sort these four
###Code
maximum = max(zero_zero,zero_one,one_zero,one_one)
if maximum == zero_zero:
print("zero_zero is maximum")
if maximum == zero_one:
print("zero_one is maximum")
if maximum == one_zero:
print("one_zero is maximum")
if maximum == one_one:
print("one_one is maximum")
zero_zero_to_be_incresed = maximum - zero_zero ## where both are 0
one_zero_to_be_incresed = maximum - one_zero ## where class is 1 attribute is 0
one_one_to_be_incresed = maximum - one_one ## where class is 1 attribute is 1
print(zero_zero_to_be_incresed,one_zero_to_be_incresed,one_one_to_be_incresed)
df_zero_zero = dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 0)]
df_one_zero = dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 0)]
df_one_one = dataset_orig_train[(dataset_orig_train['Probability'] == 1) & (dataset_orig_train[protected_attribute] == 1)]
df_zero_zero['race'] = df_zero_zero['race'].astype(str)
df_zero_zero['sex'] = df_zero_zero['sex'].astype(str)
df_one_zero['race'] = df_one_zero['race'].astype(str)
df_one_zero['sex'] = df_one_zero['sex'].astype(str)
df_one_one['race'] = df_one_one['race'].astype(str)
df_one_one['sex'] = df_one_one['sex'].astype(str)
df_zero_zero = generate_samples(zero_zero_to_be_incresed,df_zero_zero,'Adult')
df_one_zero = generate_samples(one_zero_to_be_incresed,df_one_zero,'Adult')
df_one_one = generate_samples(one_one_to_be_incresed,df_one_one,'Adult')
###Output
_____no_output_____
###Markdown
Append the dataframes
###Code
df = df_zero_zero.append(df_one_zero)
df = df.append(df_one_one)
df['race'] = df['race'].astype(float)
df['sex'] = df['sex'].astype(float)
df_zero_one = dataset_orig_train[(dataset_orig_train['Probability'] == 0) & (dataset_orig_train[protected_attribute] == 1)]
df = df.append(df_zero_one)
###Output
_____no_output_____
###Markdown
Check score after oversampling
###Code
X_train, y_train = df.loc[:, df.columns != 'Probability'], df['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100) # LSR
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'accuracy'))
print("F1 Score :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'F1'))
print("aod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'aod'))
print("eod :"+protected_attribute,measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'eod'))
print("SPD:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'SPD'))
print("DI:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, protected_attribute, 'DI'))
###Output
_____no_output_____
###Markdown
Verification
###Code
# first one is class value and second one is protected attribute value
zero_zero = len(df[(df['Probability'] == 0) & (df[protected_attribute] == 0)])
zero_one = len(df[(df['Probability'] == 0) & (df[protected_attribute] == 1)])
one_zero = len(df[(df['Probability'] == 1) & (df[protected_attribute] == 0)])
one_one = len(df[(df['Probability'] == 1) & (df[protected_attribute] == 1)])
print(zero_zero,zero_one,one_zero,one_one)
###Output
_____no_output_____ |
00_CommunityLearning.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
week3_course_python_III/day5_probability_statistics/theory/json/read_json.ipynb | ###Markdown
Writing a json to a file
###Code
import requests
import json
import pandas as pd
nombre_archivo = "pepito_ataulfo.json"
mi_diccionario = {"clave": 2,
"otra_clave": 3,
515:[1,3,4,5,6,7,8,2],
"DICT":{"clave_pequena": "valor_pequeño"}}
with open(nombre_archivo, 'w+') as outfile:
json.dump(mi_diccionario, outfile, indent=4)
###Output
_____no_output_____
###Markdown
Appending to an existing file (if required)
###Code
with open(nombre_archivo, 'a+', encoding='latin1') as outfile:
json.dump(mi_diccionario, outfile, indent=4)
###Output
_____no_output_____
###Markdown
Reading a json file
###Code
import json
with open('data_indented.json', 'r+') as outfile:
json_data_indented_readed = json.load(outfile)
print(type(json_data_indented_readed))
json_data_indented_readed
json_data_indented_readed["members"]
jsons_dicts = []
for mini_json in json_data_indented_readed["members"]:
mini_json_readed_string = json.dumps(mini_json)
string_to_json = json.loads(mini_json_readed_string)
print(mini_json_readed_string)
print("-----------")
print(string_to_json)
jsons_dicts.append(string_to_json)
print("\n##############################\n")
print(type(mini_json_readed))
print(type(string_to_json))
# write each mini_json to a different file
for i, jsn in enumerate(jsons_dicts):
nombre_archivo = str(i) + '_final_data.json'
with open(nombre_archivo, 'w+') as outfile:
json.dump(jsn, outfile, indent=4)
###Output
_____no_output_____
###Markdown
Writing a json to a file
###Code
# we use json because this file type is widely used in data science for storing dictionaries
# we have created a file to import the dictionary in an ipynb; a json must contain dictionaries or lists of dictionaries
import json  # it has to be imported first
nombre_archivo = "pepito_ataulfo.json"  # to dump information into a new file
mi_diccionario = {"clave": 2,
                  "otra_clave": 3,
                  515:[1,3,4,5,6,7,8,2],
                  "DICT":{"clave_pequena": "valor_pequeño"}}  # as an example we dump this
with open(nombre_archivo, 'w+') as outfile:  # this opens a file in write mode "w+"; outfile is the handle, w+ overwrites
    json.dump(mi_diccionario, outfile, indent=4)
# the dictionary is dumped into the named file with an indentation of 4
# after running the code, the file created in the same path as this notebook contains what was added
###Output
_____no_output_____
###Markdown
Appending info to an existing file (if required)
###Code
with open(nombre_archivo, 'a+') as outfile:  # with the 'a' mode more information is written, so the content ends up copied twice
    json.dump(mi_diccionario, outfile, indent=4)
###Output
_____no_output_____
###Markdown
Reading a json file
###Code
import json
with open('data_indented.json', 'r+') as outfile:  # open this file with read permission r+
    json_data_indented_readed = json.load(outfile)  # load the file, which contains a dictionary
print(type(json_data_indented_readed))  # this shows the class, and below it the imported dictionary
json_data_indented_readed
json_data_indented_readed["members"]  # access the values of the "members" key
jsons_dicts = []
# json.load cannot read every kind of file; for those we first read the content as a string and then turn it into a dictionary
# now we read each of the small jsons and store each small dictionary in a list
for mini_json in json_data_indented_readed["members"]:  # loop over the list stored in the dictionary
    mini_json_readed_string = json.dumps(mini_json)  # dumps reads anything as a string
    string_to_json = json.loads(mini_json_readed_string)  # loads turns a string into a dictionary; it has to be done like this, first dumps and then loads into a dictionary (this is data wrangling: reading the information, whatever it is, in a form you can work with)
    print(mini_json_readed_string)  # this is a string
    print("-----------")
    print(string_to_json)  # this is a dictionary
    jsons_dicts.append(string_to_json)  # third step: add the dictionary to the "jsons_dicts" list
    print("\n##############################\n")
print(type(mini_json_readed_string))
print(type(string_to_json))
jsons_dicts
# 1. read as a string  2. turn it into a dictionary  3. append it to a list
# write each mini_json to a different file, one per mini dictionary of the json
for i, jsn in enumerate(jsons_dicts):  # with 3 dictionaries in the list there will be 3 files, each named by its position
    nombre_archivo = str(i) + '_final_data.json'
    with open(nombre_archivo, 'w+') as outfile:
        json.dump(jsn, outfile, indent=4)
with open('0_final_data.json', 'r+') as outfile:  # this is how one of the new files is loaded back; to load it you have to do it like this
    json_0_final_data = json.load(outfile)
print(json_0_final_data)
# SUMMARY:
# 1. dump the information of a dictionary into a json, w+ (creating the file from scratch):
#    json.dump(mi_diccionario, outfile, indent=4)
# 2. dump the information of a dictionary into a json, a+ (adding the information without deleting)
# 3. two options:
# 3.1 I have a local .json file and I open it using json.load()
# 3.2 I have either a local file in some other format or one on the web --> 1st read it as a string --> 2nd transform the string variable into a dictionary. If there is any kind of error when converting it to a dictionary, you have to work on the string until it can be used as a dict
###Output
_____no_output_____ |
2020-09-25-basic-object-detector.ipynb | ###Markdown
"Object detection"> "Basic image processing and simple color based object detector"- toc: false- branch: master- badges: true- comments: true- categories: [fastpages, jupyter]- image: images/some_folder/your_image.png- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2 In object detection, one seeks to develop algorithm that identifies a specific object in an image. Here, we'll see how to build a very simple object detector (based on color) using opencv. More sophisticated object detection algorithms are capable of identifying multiple objects in a single image. For example, one can train an object detection model to identify various types of fruits, etc. Later, we'll also see that our object detection model is not exactly perfect. Nevertheless, aim of this notebook is not to build a world-class object detector but to introduce the reader to basic computer vision and image processing. Let's start by loading some useful libraries
###Code
# A popular python library useful for working with arrays
import numpy as np
# opencv library
import cv2
# For image visualization
import matplotlib.pyplot as plt
#Plots are displayed below the code cell
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's load and inspect the dimensions of our image. Images are basically a matrix of size height\*width\*color channels.
###Code
fruits = cv2.imread('apple_banana.png') # cv2.imread loads an image
fruits.shape
###Output
_____no_output_____
###Markdown
So, we can see that our image is 1216 by 752 pixels and it has 3 color channels. Next, we'll convert our image into the RGB color space. RGB is an additive color model where we can obtain other colors by linear combinations of red, green, and blue. Each of the red, green and blue light levels is encoded as a number in the range from 0 to 255, with 0 denoting zero light and 255 denoting maximum light. To obtain a matrix with values ranging from 0 to 1, we'll divide by 255.
###Code
fruits = cv2.cvtColor(fruits, cv2.COLOR_BGR2RGB) # cvtColor method to convert an image from one color space to another.
fruits = fruits / 255.0
###Output
_____no_output_____
###Markdown
Finally, let's plot our image.
###Code
plt.imshow(fruits)
###Output
_____no_output_____
###Markdown
We can see that our image contains one **red** apple and one **yellow** banana. Next, we will build a very basic object detector which can pinpoint the apple and the banana in our image based on their colors. There are more sophisticated algorithms out there for this task, but that's for some other time. We start by creating two new images of the same dimensions as our original image and fill the first one with red - to detect the apple - and the second one with yellow - to detect the banana.
###Code
apple_red = np.zeros(np.shape(fruits))
banana_yellow = np.zeros(np.shape(fruits))
apple_red[:,:,0] = 1 # set red channel to 1 - index 0 corresponds to red channel
banana_yellow[:,:,0:2] = 1 # set yellow - it can be done by filling the red and green channels with 1
fig, (ax1, ax2) = plt.subplots(1,2)
ax1.imshow(apple_red)
ax2.imshow(banana_yellow)
###Output
_____no_output_____
###Markdown
Now, we will compare the pixels between our colored and fruits images. One way is to calculate the Euclidean distance as follows:$$d_{x,y} = \sqrt{\sum_{z = 1}^{3}(R_{xyz} - F_{xyz})^2} $$where $d_{x,y}$ is the Euclidean distance between the pixel values over all 3 color channels of the two compared images $R$ and $F$. To implement this, we will first subtract the two matrices from each other, and then take the norm along the channel axis. This can be easily achieved with numpy's `linalg.norm` method (don't forget to set the axis to 2).
###Code
# Subtract matrices
diff_red = fruits - apple_red
diff_yellow = fruits - banana_yellow
# Take norm of both vectors
dist_red = np.linalg.norm(diff_red, axis=2)
dist_yellow = np.linalg.norm(diff_yellow, axis=2)
# Let's plot our matrix with values, the imshow function color-maps them.
# For apple(red) detector
plt.imshow(dist_red)
plt.colorbar()
###Output
_____no_output_____
###Markdown
One can see in the plot above that the pixels with the lowest value in the matrix are the pixels that make up the apple (see colorbar for reference). This makes sense as those pixels correspond to the red-most pixels in the fruits image. Let's also plot the matrix for the banana (yellow) detector.
###Code
# For banana (yellow) detector
plt.imshow(dist_yellow)
plt.colorbar()
###Output
_____no_output_____
###Markdown
Again we see that the pixels with the lowest value in the matrix are the pixels that make up the banana. Now in order to pinpoint the apple and the banana in our fruits image, we need to find the index of the matrix element with the lowest value.
###Code
ind_red = np.argmin(dist_red)
print ("red most pixel index= ", ind_red)
ind_yellow = np.argmin(dist_yellow)
print ("yellow most pixel index = ", ind_yellow)
###Output
red most pixel index= 544887
yellow most pixel index = 225109
###Markdown
In order to point the location of this index on our fruits image i.e. to pinpoint our object, we need the x,y coordinates of the index. This can be done using the np.unravel_index method.
###Code
# We will get the height and width of our fruits image
image = np.shape(fruits)[0:2]
(y_red, x_red) = np.unravel_index(ind_red, image)
(y_yellow, x_yellow) = np.unravel_index(ind_yellow, image)
###Output
_____no_output_____
###Markdown
Finally, it's time to pinpoint our objects! Let's mark the apple and the banana on the image.
###Code
fig, (ax1, ax2) = plt.subplots(1,2)
# Apple
ax1.scatter(x_red, y_red, c='black', s = 100, marker = 'X')
ax1.imshow(fruits)
# Banana
ax2.scatter(x_yellow, y_yellow, c='black', s = 100, marker = 'X')
ax2.imshow(fruits)
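# Hedged extension: instead of a single point, thresholding the distance maps gives rough
# masks for each fruit. The 0.4 threshold is an assumption chosen for illustration only.
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(dist_red < 0.4, cmap='gray')
ax2.imshow(dist_yellow < 0.4, cmap='gray')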
###Output
_____no_output_____ |
docs/practices/nlp/addition_rnn.ipynb | ###Markdown
Completing number addition with a sequence-to-sequence model**Author:** [jm12138](https://github.com/jm12138) **Date:** 2021.12 **Abstract:** This example shows how to complete a number-addition task with PaddlePaddle: using the `LSTM` provided by the framework, we build a sequence-to-sequence model, then train it and run predictions on a randomly generated dataset. 1. Environment setup This tutorial is written for Paddle 2.2. If your environment is a different version, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) to install Paddle 2.2.
###Code
# Import the packages required to run this project
import paddle
import paddle.nn as nn
import random
import numpy as np
from visualdl import LogWriter
# Print the Paddle version
print('paddle version: %s' % paddle.__version__)
###Output
paddle version: 2.2.1
###Markdown
2. Building the dataset* Randomly generate data and use it to construct the dataset* Build the dataset by subclassing ``paddle.io.Dataset``
###Code
# 编码函数
def encoder(text, LEN, label_dict):
# 文本转ID
ids = [label_dict[word] for word in text]
# 对长度进行补齐
ids += [label_dict[' ']]*(LEN-len(ids))
return ids
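# For example, encoder('12+40', 5, label_dict) returns [1, 2, 10, 4, 0];
# shorter inputs are right-padded with the ID of ' ' (11) up to LEN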
# 单个数据生成函数
def make_data(inputs, labels, DIGITS, label_dict):
MAXLEN = DIGITS + 1 + DIGITS
# 对输入输出文本进行ID编码
inputs = encoder(inputs, MAXLEN, label_dict)
labels = encoder(labels, DIGITS + 1, label_dict)
return inputs, labels
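# make_data pads the input expression to MAXLEN (= 2*DIGITS + 1) characters and the label to
# DIGITS + 1 characters, since the sum of two DIGITS-digit numbers may need one extra digit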
# 批量数据生成函数
def gen_datas(DATA_NUM, MAX_NUM, DIGITS, label_dict):
datas = []
while len(datas)<DATA_NUM:
# 随机取两个数
a = random.randint(0,MAX_NUM)
b = random.randint(0,MAX_NUM)
# 生成输入文本
inputs = '%d+%d' % (a, b)
# 生成输出文本
labels = str(eval(inputs))
# 生成单个数据
inputs, labels = [np.array(_).astype('int64') for _ in make_data(inputs, labels, DIGITS, label_dict)]
datas.append([inputs, labels])
return datas
# 继承paddle.io.Dataset来构造数据集
class Addition_Dataset(paddle.io.Dataset):
# 重写数据集初始化函数
def __init__(self, datas):
super(Addition_Dataset, self).__init__()
self.datas = datas
# 重写生成样本的函数
def __getitem__(self, index):
data, label = [paddle.to_tensor(_) for _ in self.datas[index]]
return data, label
# 重写返回数据集大小的函数
def __len__(self):
return len(self.datas)
print('generating datas..')
# 定义字符表
label_dict = {
'0': 0, '1': 1, '2': 2, '3': 3,
'4': 4, '5': 5, '6': 6, '7': 7,
'8': 8, '9': 9, '+': 10, ' ': 11
}
# 输入数字最大位数
DIGITS = 2
# 数据数量
train_num = 5000
dev_num = 500
# 数据批大小
batch_size = 32
# 读取线程数
num_workers = 8
# 定义一些所需变量
MAXLEN = DIGITS + 1 + DIGITS
MAX_NUM = 10**(DIGITS)-1
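# With DIGITS = 2: MAXLEN = 5 (e.g. '99+99') and MAX_NUM = 99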
# 生成数据
train_datas = gen_datas(
train_num,
MAX_NUM,
DIGITS,
label_dict
)
dev_datas = gen_datas(
dev_num,
MAX_NUM,
DIGITS,
label_dict
)
# 实例化数据集
train_dataset = Addition_Dataset(train_datas)
dev_dataset = Addition_Dataset(dev_datas)
print('making the dataset...')
# 实例化数据读取器
train_reader = paddle.io.DataLoader(
train_dataset,
batch_size=batch_size,
shuffle=True,
drop_last=True
)
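# drop_last=True discards the final incomplete batch, so every batch holds exactly batch_size samples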
dev_reader = paddle.io.DataLoader(
dev_dataset,
batch_size=batch_size,
shuffle=False,
drop_last=True
)
print('finish')
###Output
generating datas..
making the dataset...
finish
###Markdown
3. Building the model* The model is built by subclassing ``paddle.nn.Layer``* It is a simple ``LSTM``-based ``Seq2Seq`` model* It consists of four main layers: 1. Embedding layer (``Embedding``): converts the input text sequence into embedding vectors 2. Encoder (``LSTM``): encodes the embedding vectors 3. Decoder (``LSTM``): decodes the encoded vectors 4. Fully connected layer (``Linear``): linearly maps the decoded vectors to character scores* The loss function is the cross-entropy loss
###Code
# 继承paddle.nn.Layer类
class Addition_Model(nn.Layer):
# 重写初始化函数
# 参数:字符表长度、嵌入层大小、隐藏层大小、解码器层数、处理数字的最大位数
def __init__(self, char_len=12, embedding_size=128, hidden_size=128, num_layers=1, DIGITS=2):
super(Addition_Model, self).__init__()
# 初始化变量
self.DIGITS = DIGITS
self.MAXLEN = DIGITS + 1 + DIGITS
self.hidden_size = hidden_size
self.char_len = char_len
# 嵌入层
self.emb = nn.Embedding(
char_len,
embedding_size
)
# 编码器
self.encoder = nn.LSTM(
input_size=embedding_size,
hidden_size=hidden_size,
num_layers=1
)
# 解码器
self.decoder = nn.LSTM(
input_size=hidden_size,
hidden_size=hidden_size,
num_layers=num_layers
)
# 全连接层
self.fc = nn.Linear(
hidden_size,
char_len
)
# 重写模型前向计算函数
# 参数:输入[None, MAXLEN]、标签[None, DIGITS + 1]
def forward(self, inputs, labels=None):
# 嵌入层
out = self.emb(inputs)
# 编码器
out, (_, _) = self.encoder(out)
# 按时间步切分编码器输出
out = paddle.split(out, self.MAXLEN, axis=1)
# 取最后一个时间步的输出并复制 DIGITS + 1 次
out = paddle.expand(out[-1], [out[-1].shape[0], self.DIGITS + 1, self.hidden_size])
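        # out now has shape [batch_size, DIGITS + 1, hidden_size]: the last encoder time step
        # repeated once for every output character position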
# 解码器
out, (_, _) = self.decoder(out)
# 全连接
out = self.fc(out)
# 如果标签存在,则计算其损失和准确率
if labels is not None:
# 计算交叉熵损失
loss = nn.functional.cross_entropy(out, labels)
# 计算准确率
acc = paddle.metric.accuracy(paddle.reshape(out, [-1, self.char_len]), paddle.reshape(labels, [-1, 1]))
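            # Note: this accuracy is measured per output character, not per whole expression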
# 返回损失和准确率
return loss, acc
# 返回输出
return out
###Output
_____no_output_____
###Markdown
4. Training and evaluating the model* Use ``Adam`` as the optimizer for training* Use model accuracy as the evaluation metric* Use ``VisualDL`` to visualize the training metrics* During training, the model is evaluated periodically and the best model is saved
###Code
# 初始化log写入器
log_writer = LogWriter(logdir="./log")
# 模型参数设置
embedding_size = 128
hidden_size=128
num_layers=1
# 训练参数设置
epoch_num = 50
learning_rate = 0.001
log_iter = 2000
eval_iter = 500
# 定义一些所需变量
global_step = 0
log_step = 0
max_acc = 0
# 实例化模型
model = Addition_Model(
char_len=len(label_dict),
embedding_size=embedding_size,
hidden_size=hidden_size,
num_layers=num_layers,
DIGITS=DIGITS)
# 将模型设置为训练模式
model.train()
# 设置优化器,学习率,并且把模型参数给优化器
opt = paddle.optimizer.Adam(
learning_rate=learning_rate,
parameters=model.parameters()
)
# 启动训练,循环epoch_num个轮次
for epoch in range(epoch_num):
# 遍历数据集读取数据
for batch_id, data in enumerate(train_reader()):
# 读取数据
inputs, labels = data
# 模型前向计算
loss, acc = model(inputs, labels=labels)
# 打印训练数据
if global_step%log_iter==0:
print('train epoch:%d step: %d loss:%f acc:%f' % (epoch, global_step, loss.numpy(), acc.numpy()))
log_writer.add_scalar(tag="train/loss", step=log_step, value=loss.numpy())
log_writer.add_scalar(tag="train/acc", step=log_step, value=acc.numpy())
log_step+=1
# 模型验证
if global_step%eval_iter==0:
model.eval()
losses = []
accs = []
            for data in dev_reader():
                # Unpack the validation batch
                inputs_eval, labels_eval = data
                loss_eval, acc_eval = model(inputs_eval, labels=labels_eval)
losses.append(loss_eval.numpy())
accs.append(acc_eval.numpy())
avg_loss = np.concatenate(losses).mean()
avg_acc = np.concatenate(accs).mean()
print('eval epoch:%d step: %d loss:%f acc:%f' % (epoch, global_step, avg_loss, avg_acc))
log_writer.add_scalar(tag="dev/loss", step=log_step, value=avg_loss)
log_writer.add_scalar(tag="dev/acc", step=log_step, value=avg_acc)
# 保存最佳模型
if avg_acc>max_acc:
max_acc = avg_acc
print('saving the best_model...')
paddle.save(model.state_dict(), 'best_model')
model.train()
# 反向传播
loss.backward()
# 使用优化器进行参数优化
opt.step()
# 清除梯度
opt.clear_grad()
# 全局步数加一
global_step += 1
# 保存最终模型
paddle.save(model.state_dict(),'final_model')
###Output
train epoch:0 step: 0 loss:2.489843 acc:0.072917
eval epoch:0 step: 0 loss:2.489844 acc:0.072917
saving the best_model...
eval epoch:3 step: 500 loss:1.132963 acc:0.583333
saving the best_model...
eval epoch:6 step: 1000 loss:0.922499 acc:0.718750
saving the best_model...
eval epoch:9 step: 1500 loss:0.833021 acc:0.666667
train epoch:12 step: 2000 loss:0.732612 acc:0.739583
eval epoch:12 step: 2000 loss:0.732612 acc:0.739583
saving the best_model...
eval epoch:16 step: 2500 loss:0.448837 acc:0.812500
saving the best_model...
eval epoch:19 step: 3000 loss:0.225695 acc:0.947917
saving the best_model...
eval epoch:22 step: 3500 loss:0.099140 acc:0.989583
saving the best_model...
train epoch:25 step: 4000 loss:0.065642 acc:1.000000
eval epoch:25 step: 4000 loss:0.065642 acc:1.000000
saving the best_model...
eval epoch:28 step: 4500 loss:0.033392 acc:1.000000
eval epoch:32 step: 5000 loss:0.020793 acc:1.000000
eval epoch:35 step: 5500 loss:0.021470 acc:1.000000
train epoch:38 step: 6000 loss:0.015860 acc:1.000000
eval epoch:38 step: 6000 loss:0.015860 acc:1.000000
eval epoch:41 step: 6500 loss:0.008177 acc:1.000000
eval epoch:44 step: 7000 loss:0.004767 acc:1.000000
eval epoch:48 step: 7500 loss:0.003457 acc:1.000000
###Markdown
5. Testing the model* Run inference with the best saved model
###Code
# Invert the character table (ID -> character)
label_dict_adv = {v: k for k, v in label_dict.items()}
# The expression to compute
input_text = '12+40'
# Encode the input text into IDs
inputs = encoder(input_text, MAXLEN, label_dict)
# Convert the input into tensor form
inputs = np.array(inputs).reshape(-1, MAXLEN)
inputs = paddle.to_tensor(inputs)
# Load the saved model parameters
params_dict = paddle.load('best_model')
model.set_dict(params_dict)
# Switch to evaluation mode
model.eval()
# Run inference
out = model(inputs)
# Decode the result (greedy argmax over the character dimension)
result = ''.join([label_dict_adv[_] for _ in np.argmax(out.numpy(), -1).reshape(-1)])
# Print the result
print('the model answer: %s=%s' % (input_text, result))
print('the true answer: %s=%s' % (input_text, eval(input_text)))
###Output
the model answer: 12+40=52
the true answer: 12+40=52
|
site/ja/tutorials/customization/custom_training.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training: basics View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In the previous tutorial, you learned about the TensorFlow APIs for automatic differentiation, one of the basic building blocks of machine learning. In this tutorial, you will use the TensorFlow primitives introduced so far to do some simple machine learning. TensorFlow includes `tf.keras`, a high-level neural network API that reduces boilerplate through abstraction and makes TensorFlow easy to use without sacrificing flexibility and performance. We strongly recommend the [tf.Keras API](../../guide/keras/overview.ipynb) for development. However, in this short tutorial you will learn how to train a neural network from first principles, to build a strong foundation. Setup
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
###Output
_____no_output_____
###Markdown
VariablesTensors in TensorFlow are immutable, stateless objects. Machine learning models, however, need changing state: as your model trains, the same code that computes predictions should behave differently over time (hopefully with lower loss). To represent this state, which needs to change as the computation proceeds, you can rely on the fact that Python is a stateful programming language.
###Code
# Using Python state
x = tf.zeros([10, 10])
x += 2 # This is equivalent to x = x + 2; it does not mutate the original value of x
print(x)
###Output
_____no_output_____
###Markdown
TensorFlow has stateful operations built in, and these are often easier to use than low-level Python representations of state. A `tf.Variable` object stores a value and transparently reads from this stored value when used. Operations are provided to manipulate the value stored in a TensorFlow variable (`tf.assign_sub`, `tf.scatter_update`, and so on).
###Code
v = tf.Variable(1.0)
# Use Python's `assert` as a debugging statement to test a condition
assert v.numpy() == 1.0
# Reassign the value of `v`
v.assign(3.0)
assert v.numpy() == 3.0
# Apply TensorFlow's `tf.square()` operation to `v` and reassign the result
v.assign(tf.square(v))
assert v.numpy() == 9.0
###Output
_____no_output_____
###Markdown
Computations using `tf.Variable` are automatically traced when computing gradients. For variables that represent embeddings, TensorFlow performs sparse updates by default, which are more efficient in computation and memory. Using `tf.Variable` is also a way to signal to readers of your code that this piece of state is mutable. Fit a linear modelLet's use the concepts you have learned so far, `Tensor`, `Variable`, and `GradientTape`, to build and train a simple model. This typically involves a few steps:1. Define the model2. Define a loss function3. Obtain training data4. Run through the training data and use an "optimizer" to fit the variables to the dataHere you will create a simple linear model, `f(x) = x * W + b`, which has two variables: `W` (weights) and `b` (bias). You will synthesize data such that a well-trained model would have `W = 3.0` and `b = 2.0`. Define the modelLet's define a simple class to encapsulate the variables and the computation.
###Code
class Model(object):
def __init__(self):
# Initialize the weight to `5.0` and the bias to `0.0`
# In practice these should be initialized to random values (for example, using `tf.random.normal`)
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
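# With W initialized to 5.0 and b to 0.0, model(3.0) should return 5.0 * 3.0 + 0.0 = 15.0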
###Output
_____no_output_____
###Markdown
Define a loss functionA loss function measures how well the output of a model for a given input matches the target output. The goal of training is to minimize this difference. Let's use the standard L2 loss, also known as least squared error.
###Code
def loss(predicted_y, target_y):
return tf.reduce_mean(tf.square(predicted_y - target_y))
###Output
_____no_output_____
###Markdown
Obtain training dataFirst, synthesize the training data by adding random Gaussian (normal) noise to the inputs.
###Code
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
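# The targets follow the true line y = 3x + 2 plus unit Gaussian noise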
###Output
_____no_output_____
###Markdown
Before training the model, visualize the loss by plotting the model's predictions in red and the training data in blue.
###Code
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
###Output
_____no_output_____
###Markdown
Define a training loopWith the network and training data ready, train the model using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) to update the weight variable (`W`) and the bias variable (`b`) so that the loss decreases. There are many variants of gradient descent, and they are captured in `tf.train.Optimizer`, our recommended implementation. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of `tf.GradientTape` for automatic differentiation and `tf.assign_sub` for decrementing a value (which combines `tf.assign` and `tf.sub`).
###Code
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
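  # assign_sub updates each variable in place: W <- W - learning_rate * dW, b <- b - learning_rate * db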
###Output
_____no_output_____
###Markdown
Finally, let's repeatedly run through the training data and see how `W` and `b` evolve.
###Code
model = Model()
# Collect the history of W and b values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Plot everything
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
カスタム訓練:基本 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook 前のチュートリアルでは、機械学習の基本構成ブロックの1つである自動微分について TensorFlow の API を学習しました。このチュートリアルでは、これまでのチュートリアルに出てきた TensorFlow の基本要素を使って、単純な機械学習を実行します。TensorFlow には `tf.keras` が含まれています。`tf.keras`は、抽象化により決まり切った記述を削減し、柔軟さと性能を犠牲にすることなく TensorFlow をやさしく使えるようにする、高度なニューラルネットワーク API です。開発には [tf.Keras API](../../guide/keras/overview.ipynb) を使うことを強くおすすめします。しかしながら、この短いチュートリアルでは、しっかりした基礎を身につけていただくために、ニューラルネットワークの訓練についていちから学ぶことにします。 設定
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
###Output
_____no_output_____
###Markdown
変数TensorFlow のテンソルはイミュータブルでステートレスなオブジェクトです。しかしながら、機械学習モデルには変化する状態が必要です。モデルの訓練が進むにつれて、推論を行うおなじコードが異なる振る舞いをする必要があります(望むべくはより損失の少なくなるように)。この計算が進むにつれて変化する必要がある状態を表現するために、Python が状態を保つプログラミング言語であることを利用することができます。
###Code
# Python の状態を使う
x = tf.zeros([10, 10])
x += 2 # これは x = x + 2 と等価で, x の元の値を変えているわけではない
print(x)
###Output
_____no_output_____
###Markdown
TensorFlow にはステートフルな演算が組み込まれているので、状態を表現するのに低レベルの Python による表現を使うよりは簡単なことがしばしばあります。`tf.Variable`オブジェクトは値を保持し、何も指示しなくともこの保存された値を読み出します。TensorFlow の変数に保持された値を操作する演算(`tf.assign_sub`, `tf.scatter_update`, など)が用意されています。
###Code
v = tf.Variable(1.0)
# Python の `assert` を条件をテストするデバッグ文として使用
assert v.numpy() == 1.0
# `v` に値を再代入
v.assign(3.0)
assert v.numpy() == 3.0
# `v` に TensorFlow の `tf.square()` 演算を適用し再代入
v.assign(tf.square(v))
assert v.numpy() == 9.0
###Output
_____no_output_____
###Markdown
`tf.Variable`を使った計算は、勾配計算の際に自動的にトレースされます。埋め込みを表す変数では、TensorFlow は既定でスパースな更新を行います。これは計算量やメモリ使用量においてより効率的です。`tf.Variable`はあなたのコードを読む人にその状態の一部がミュータブルであることを示す方法でもあります。 線形モデルの適合これまでに学んだ `Tensor`、 `Variable`、 そして `GradientTape`という概念を使って、簡単なモデルの構築と訓練を行ってみましょう。通常、これには次のようないくつかの手順が含まれます。1. モデルの定義2. 損失関数の定義3. 訓練データの取得4. 訓練データを使って実行し、"optimizer" を使って変数をデータに適合ここでは、`f(x) = x * W + b`という簡単な線形モデルを作ります。このモデルには `W` (重み) と `b` (バイアス) の2つの変数があります。十分訓練されたモデルが `W = 3.0` と `b = 2.0` になるようなデータを人工的に作ります。 モデルの定義変数と計算をカプセル化する単純なクラスを定義してみましょう。
###Code
class Model(object):
def __init__(self):
# 重みを `5.0` に、バイアスを `0.0` に初期化
# 実際には、これらの値は乱数で初期化するべき(例えば `tf.random.normal` を使って)
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
###Output
_____no_output_____
###Markdown
損失関数の定義損失関数は、ある入力値に対するモデルの出力がどれだけ出力の目的値に近いかを測るものです。訓練を通じて、この差異を最小化するのがゴールとなります。最小二乗誤差とも呼ばれる L2 損失を使ってみましょう。
###Code
def loss(predicted_y, target_y):
return tf.reduce_mean(tf.square(predicted_y - target_y))
###Output
_____no_output_____
###Markdown
訓練データの取得最初に、入力にランダムなガウス(正規)分布のノイズを加えることで、訓練用データを生成します。
###Code
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
###Output
_____no_output_____
###Markdown
モデルを訓練する前に、モデルの予測値を赤で、訓練データを青でプロットすることで、損失を可視化します。
###Code
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
###Output
_____no_output_____
###Markdown
訓練ループの定義ネットワークと訓練データが準備できたところで、損失が少なくなるように、重み変数 (`W`) とバイアス変数 (`b`) を更新するために、[gradient descent (勾配降下法)](https://en.wikipedia.org/wiki/Gradient_descent) を使ってモデルを訓練します。勾配降下法にはさまざまな変種があり、我々の推奨する実装である `tf.train.Optimizer` にも含まれています。しかし、ここでは基本原理から構築するという精神で、自動微分を行う `tf.GradientTape` と、値を減少させる `tf.assign_sub` (これは、`tf.assign` と `tf.sub` の組み合わせですが)の力を借りて、この基本計算を実装してみましょう。
###Code
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
###Output
_____no_output_____
###Markdown
最後に、訓練データ全体に対して繰り返し実行し、`W` と `b` がどのように変化するかを見てみましょう。
###Code
model = Model()
# 後ほどプロットするために、W 値と b 値の履歴を集める
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# すべてをプロット
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
カスタム訓練:基本 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook 前のチュートリアルでは、機械学習の基本構成ブロックの1つである自動微分について TensorFlow の API を学習しました。このチュートリアルでは、これまでのチュートリアルに出てきた TensorFlow の基本要素を使って、単純な機械学習を実行します。TensorFlow には `tf.keras` が含まれています。`tf.keras`は、抽象化により決まり切った記述を削減し、柔軟さと性能を犠牲にすることなく TensorFlow をやさしく使えるようにする、高度なニューラルネットワーク API です。開発には [tf.Keras API](../../guide/keras/overview.ipynb) を使うことを強くおすすめします。しかしながら、この短いチュートリアルでは、しっかりした基礎を身につけていただくために、ニューラルネットワークの訓練についていちから学ぶことにします。 設定
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
変数TensorFlow のテンソルはイミュータブルでステートレスなオブジェクトです。しかしながら、機械学習モデルには変化する状態が必要です。モデルの訓練が進むにつれて、推論を行うおなじコードが異なる振る舞いをする必要があります(望むべくはより損失の少なくなるように)。この計算が進むにつれて変化する必要がある状態を表現するために、Python が状態を保つプログラミング言語であることを利用することができます。
###Code
# Python の状態を使う
x = tf.zeros([10, 10])
x += 2 # これは x = x + 2 と等価で, x の元の値を変えているわけではない
print(x)
###Output
_____no_output_____
###Markdown
TensorFlow にはステートフルな演算が組み込まれているので、状態を表現するのに低レベルの Python による表現を使うよりは簡単なことがしばしばあります。`tf.Variable`オブジェクトは値を保持し、何も指示しなくともこの保存された値を読み出します。TensorFlow の変数に保持された値を操作する演算(`tf.assign_sub`, `tf.scatter_update`, など)が用意されています。
###Code
v = tf.Variable(1.0)
# Python の `assert` を条件をテストするデバッグ文として使用
assert v.numpy() == 1.0
# `v` に値を再代入
v.assign(3.0)
assert v.numpy() == 3.0
# `v` に TensorFlow の `tf.square()` 演算を適用し再代入
v.assign(tf.square(v))
assert v.numpy() == 9.0
###Output
_____no_output_____
###Markdown
`tf.Variable`を使った計算は、勾配計算の際に自動的にトレースされます。埋め込みを表す変数では、TensorFlow は既定でスパースな更新を行います。これは計算量やメモリ使用量においてより効率的です。`tf.Variable`はあなたのコードを読む人にその状態の一部がミュータブルであることを示す方法でもあります。 線形モデルの適合これまでに学んだ `Tensor`、 `Variable`、 そして `GradientTape`という概念を使って、簡単なモデルの構築と訓練を行ってみましょう。通常、これには次のようないくつかの手順が含まれます。1. モデルの定義2. 損失関数の定義3. 訓練データの取得4. 訓練データを使って実行し、"optimizer" を使って変数をデータに適合ここでは、`f(x) = x * W + b`という簡単な線形モデルを作ります。このモデルには `W` (重み) と `b` (バイアス) の2つの変数があります。十分訓練されたモデルが `W = 3.0` と `b = 2.0` になるようなデータを人工的に作ります。 モデルの定義変数と計算をカプセル化する単純なクラスを定義してみましょう。
###Code
class Model(object):
def __init__(self):
# 重みを `5.0` に、バイアスを `0.0` に初期化
# 実際には、これらの値は乱数で初期化するべき(例えば `tf.random.normal` を使って)
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
###Output
_____no_output_____
###Markdown
損失関数の定義損失関数は、ある入力値に対するモデルの出力がどれだけ出力の目的値に近いかを測るものです。訓練を通じて、この差異を最小化するのがゴールとなります。最小二乗誤差とも呼ばれる L2 損失を使ってみましょう。
###Code
def loss(predicted_y, target_y):
return tf.reduce_mean(tf.square(predicted_y - target_y))
###Output
_____no_output_____
###Markdown
訓練データの取得最初に、入力にランダムなガウス(正規)分布のノイズを加えることで、訓練用データを生成します。
###Code
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
###Output
_____no_output_____
###Markdown
モデルを訓練する前に、モデルの予測値を赤で、訓練データを青でプロットすることで、損失を可視化します。
###Code
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
###Output
_____no_output_____
###Markdown
訓練ループの定義ネットワークと訓練データが準備できたところで、損失が少なくなるように、重み変数 (`W`) とバイアス変数 (`b`) を更新するために、[gradient descent (勾配降下法)](https://en.wikipedia.org/wiki/Gradient_descent) を使ってモデルを訓練します。勾配降下法にはさまざまな変種があり、我々の推奨する実装である `tf.train.Optimizer` にも含まれています。しかし、ここでは基本原理から構築するという精神で、自動微分を行う `tf.GradientTape` と、値を減少させる `tf.assign_sub` (これは、`tf.assign` と `tf.sub` の組み合わせですが)の力を借りて、この基本計算を実装してみましょう。
###Code
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
###Output
_____no_output_____
###Markdown
最後に、訓練データ全体に対して繰り返し実行し、`W` と `b` がどのように変化するかを見てみましょう。
###Code
model = Model()
# 後ほどプロットするために、W 値と b 値の履歴を集める
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# すべてをプロット
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
カスタム訓練:基本 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook 前のチュートリアルでは、機械学習の基本構成ブロックの1つである自動微分について TensorFlow の API を学習しました。このチュートリアルでは、これまでのチュートリアルに出てきた TensorFlow の基本要素を使って、単純な機械学習を実行します。TensorFlow には `tf.keras` が含まれています。`tf.keras`は、抽象化により決まり切った記述を削減し、柔軟さと性能を犠牲にすることなく TensorFlow をやさしく使えるようにする、高度なニューラルネットワーク API です。開発には [tf.Keras API](../../guide/keras/overview.ipynb) を使うことを強くおすすめします。しかしながら、この短いチュートリアルでは、しっかりした基礎を身につけていただくために、ニューラルネットワークの訓練についていちから学ぶことにします。 設定
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
変数TensorFlow のテンソルはイミュータブルでステートレスなオブジェクトです。しかしながら、機械学習モデルには変化する状態が必要です。モデルの訓練が進むにつれて、推論を行うおなじコードが異なる振る舞いをする必要があります(望むべくはより損失の少なくなるように)。この計算が進むにつれて変化する必要がある状態を表現するために、Python が状態を保つプログラミング言語であることを利用することができます。
###Code
# Python の状態を使う
x = tf.zeros([10, 10])
x += 2 # これは x = x + 2 と等価で, x の元の値を変えているわけではない
print(x)
###Output
_____no_output_____
###Markdown
TensorFlow にはステートフルな演算が組み込まれているので、状態を表現するのに低レベルの Python による表現を使うよりは簡単なことがしばしばあります。`tf.Variable`オブジェクトは値を保持し、何も指示しなくともこの保存された値を読み出します。TensorFlow の変数に保持された値を操作する演算(`tf.assign_sub`, `tf.scatter_update`, など)が用意されています。
###Code
v = tf.Variable(1.0)
# Python の `assert` を条件をテストするデバッグ文として使用
assert v.numpy() == 1.0
# `v` に値を再代入
v.assign(3.0)
assert v.numpy() == 3.0
# `v` に TensorFlow の `tf.square()` 演算を適用し再代入
v.assign(tf.square(v))
assert v.numpy() == 9.0
###Output
_____no_output_____
###Markdown
`tf.Variable`を使った計算は、勾配計算の際に自動的にトレースされます。埋め込みを表す変数では、TensorFlow は既定でスパースな更新を行います。これは計算量やメモリ使用量においてより効率的です。`tf.Variable`はあなたのコードを読む人にその状態の一部がミュータブルであることを示す方法でもあります。 線形モデルの適合これまでに学んだ `Tensor`、 `Variable`、 そして `GradientTape`という概念を使って、簡単なモデルの構築と訓練を行ってみましょう。通常、これには次のようないくつかの手順が含まれます。1. モデルの定義2. 損失関数の定義3. 訓練データの取得4. 訓練データを使って実行し、"optimizer" を使って変数をデータに適合ここでは、`f(x) = x * W + b`という簡単な線形モデルを作ります。このモデルには `W` (重み) と `b` (バイアス) の2つの変数があります。十分訓練されたモデルが `W = 3.0` と `b = 2.0` になるようなデータを人工的に作ります。 モデルの定義変数と計算をカプセル化する単純なクラスを定義してみましょう。
###Code
class Model(object):
def __init__(self):
# 重みを `5.0` に、バイアスを `0.0` に初期化
# 実際には、これらの値は乱数で初期化するべき(例えば `tf.random.normal` を使って)
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
###Output
_____no_output_____
###Markdown
Define a loss function A loss function measures how closely the model's output for a given input matches the target output. The goal of training is to minimize this difference. Let's use the standard L2 loss, also known as least squared error.
###Code
def loss(predicted_y, target_y):
return tf.reduce_mean(tf.square(predicted_y - target_y))
###Output
_____no_output_____
###Markdown
Obtain training data First, let's generate the training data by adding random Gaussian (normal) noise to the inputs.
###Code
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
###Output
_____no_output_____
###Markdown
Before training the model, let's visualize the loss by plotting the model's predictions in red and the training data in blue.
###Code
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
###Output
_____no_output_____
###Markdown
Define a training loop With the network and the training data in place, let's train the model using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) to update the weight variable (`W`) and the bias variable (`b`) so that the loss decreases. There are many variants of gradient descent, and they are captured in `tf.train.Optimizer`, our recommended implementation. But in the spirit of building from first principles, here we will implement the basic math ourselves with the help of `tf.GradientTape` for automatic differentiation and `tf.assign_sub` for decrementing a value (which combines `tf.assign` and `tf.sub`).
###Code
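# One gradient-descent step: record the forward pass and loss on a GradientTape,
# then use the gradients dW and db to update W and b in place via assign_sub.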
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
###Output
_____no_output_____
###Markdown
Finally, let's repeatedly run through the training data and see how `W` and `b` evolve.
###Code
model = Model()
# Collect the history of W and b values for plotting later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Plot everything
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
###Output
_____no_output_____ |
neural-networks/assignment3/coding_exercise-old.ipynb | ###Markdown
Exercise Sheet 3 Machine learning basics Deadline: 02.12.2020 23:59**Instructions:** Insert your code in the *TODO* sections and type your answers in the *Answer* cells. Names and team IDs:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, log_loss
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
1 Implementing regression In this exercise we will practice implementing regression on the Titanic dataset using the package *sklearn*. Titanic dataset contains the data about passengers of the ship and information whether they survived or not. In the materials for this exercise you can find the file *titanic.csv*. This file contains preprocessed data with information about passenger ID, age, class, and *price* of their ticket. **1.1 Load the data as a pandas dataframe, using read_csv method**
###Code
# TODO: load the data into the varible 'titanic', have a look at the data
titanic = pd.read_csv('titanic.csv')
titanic.head()
###Output
_____no_output_____
###Markdown
**Look at the data and report which variables are continuous, nominal, ordinal. (0.5 points)** *Answer:* - **continuous**: A continious data represents measurements and therefore their values can’t be counted but they can be measured. For eg. Weight. It can take on every value on some range like my weight can vary from 60 Kg to 60.0001 and we get a new data. Examples of continious data are- weight, area, time. While some examples of discreet data are- grades, of numbers, money- **nominal**: Nominal data is used for naming or labelling variables.- **ordinal**:Ordinal data is a type of categorical data with an order. The variables in ordinal data are listed in an ordered manner. eg. medals in Olympic by all countries.Here we have-- continious variable- **Price**- nominal variables- **Pclass**, **Survived**- ordinal variables- **PassengerId**\\- continious variable- **PassengerId**, **Age**, **Price**- nominal variables- **Survived**- ordinal variables- **Pclass** **1.2 Here we will implement a simple linear regression and try to see if we can predict the *price* of the ticket based on the *age* of the passenger (0.5 points)** Consult the documentation on LinearRegression class in sklearn
###Code
# TODO:
# 1) create an instance of LinearRegression class
# 2) fit the model to predict Price of the ticket from Age of the passenger
# (consult the METHODS section in the documentation)
# Hint: it might be the case that you will have to reshape your data using .reshape(-1, 1).
# You can create separate numpy arrays containing only Age and Price and reshape them if needed.
lr = LinearRegression()
X = titanic["Age"].to_numpy().reshape(-1, 1) #Training data
y_true = titanic["Price"].to_numpy().reshape(-1, 1) #Target values
lr.fit(X, y_true) # It returns self, which is the variable model itself
y_pred = lr.predict(X)
###Output
_____no_output_____
###Markdown
**What are the parameters of the model that we fit? Hint: the parameters are attributes of the fitted model; consult the documentation.**
###Code
# TODO: Get the parameters of the model
print("Model parameters-")
print("Slope", lr.coef_)
print("Intercept", lr.intercept_)
# print(tuple(zip(X, y_pred))[:10])
###Output
Model parameters-
Slope [[0.33511181]]
Intercept [8.77641078]
###Markdown
**1.3 Write the formula of the fitted regression. (0.5 points)** *Answer:* $$ y = \theta_0 +\theta_1 x \\y = 8.77641078 +0.33511181 x$$ **1.4 Let us see how good are the estimated values of the model. (0.5 points)** Write the formula for Mean Squared Error and calculate the value for our age~price model. Check if you calculated it correctly using the mean_squared_error method from sklearn.metrics *Answer (MSE formula)*: $$\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.$$Where $y$ : true values \ $\hat{y}$: predicated values \ $n_{samples}$: total number of samples in the training data
###Code
# TODO:
# a) calculate mean squared error of our model
mse_np = np.sum(np.square(y_true - y_pred))/len(y_true)
print('Mean squared error using using formula: %.2f'%mse_np)
# b) check you answer using mean_squared_error method.
mse = mean_squared_error(y_true, y_pred)
print('Mean squared error using mean_squared_error method: %.2f'%mse)
###Output
Mean squared error using using formula: 433.04
Mean squared error using mean_squared_error method: 433.04
###Markdown
**1.5 Get predictions of your model (hint: there is a corresponding method) and plot them with the original data on the same graph. (1 point)**
###Code
#TODO:
# Plot original data and predictions on the same graph
plt.figure(num=None, figsize=(9.5, 6), dpi=100, facecolor='w', edgecolor='k')
plt.scatter(X, y_true, color='blue', s=10)
plt.plot(X, y_pred, color='red', linewidth=1)
plt.xlabel('Passenger\'s Age')
plt.ylabel('Ticket price')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Is Age a good predictor for the Price of the ticket? Have a look at the data again. Is there a better predictor? **1.6 Choose another predictor and repeat the same steps (1.2-1.5). Report the better predictor. (0.5 points)**
###Code
# TODO: Choose another predictor and repeat the same steps
lr = LinearRegression()
X = titanic["Pclass"].to_numpy().reshape(-1, 1) #Training data
lr.fit(X, y_true) # It returns self, which is the variable model itself
y_pred = np.round(lr.predict(X))
print("Model parameters-")
print("Slope", lr.coef_)
print("Intercept", lr.intercept_)
# a) calculate mean squared error of our model
mse_np = np.sum(np.square(y_true - y_pred))/len(y_true)
print('Mean squared error using using formula: %.2f'%mse_np)
# b) check you answer using mean_squared_error method.
mse = mean_squared_error(y_true, y_pred)
print('Mean squared error using mean_squared_error method: %.2f'%mse)
plt.figure(num=None, figsize=(9.5, 6), dpi=100, facecolor='w', edgecolor='k')
plt.scatter(X, y_true, color='blue', s=10)
plt.plot(X, y_pred, color='red', linewidth=2)
plt.rcParams['figure.figsize'] = [9.5, 6]
plt.xlabel('Passenger class (Pclass)')
plt.ylabel('Ticket price')
plt.grid()
plt.show()
# Write why Pclass is a better prediction
###Output
Model parameters-
Slope [[-17.06934444]]
Intercept [56.90784762]
Mean squared error using using formula: 252.32
Mean squared error using mean_squared_error method: 252.32
###Markdown
**Pclass** is a better predictor for the price of the ticket. We can go on and try to improve the fit even more by increasing the complexity of the model.**1.7 Consult this Tutorial and fit polynomial regressions using the better predictor. (1.5 points)**1) Fit regressions of order 2, 5, and 10. 2) Get parameters of the models and write down the equations for each model inserting the fitted parameters. 3) Compute MSE for each model and compare them. Does increasing the capacity of the model improve its performance?
###Code
# TODO: Perform steps 1-3.
# Fit regression
X = titanic["Age"].to_numpy().reshape(-1, 1) #Training data
def poly_reg(deg, X, y_true):
    # pipeline: polynomial feature expansion followed by ordinary least squares
    Input = [('poly', PolynomialFeatures(degree=deg)), ('lr', LinearRegression())]
    pipe = Pipeline(Input)
    pipe.fit(X, y_true)
    mse = mean_squared_error(y_true, pipe.predict(X))
    print("Poly reg order: %d" % deg)
    reg_label = "Inliers coef:%s - b:%f" % (np.array2string(pipe.named_steps.lr.coef_, formatter={'float_kind': lambda fk: "%f" % fk}), pipe.named_steps.lr.intercept_)
    print(reg_label)
    print('MSE: %.2f\n' % mse)
    return pipe  # return the fitted pipeline so its predictions can be plotted below

pipe_deg2 = poly_reg(2, X, y_true)
pipe_deg5 = poly_reg(5, X, y_true)
pipe_deg10 = poly_reg(10, X, y_true)
## Extra: sort by Age so the fitted curves are drawn smoothly
sorted_zip = sorted(zip(X, pipe_deg2.predict(X)))
X2_poly, y2_poly_pred = zip(*sorted_zip)
sorted_zip = sorted(zip(X, pipe_deg5.predict(X)))
X5_poly, y5_poly_pred = zip(*sorted_zip)
sorted_zip = sorted(zip(X, pipe_deg10.predict(X)))
X10_poly, y10_poly_pred = zip(*sorted_zip)
plt.figure(num=None, figsize=(9.5, 6), dpi=100, facecolor='w', edgecolor='k')
plt.scatter(X, y_true, color='blue', s=10)
plt.plot(X2_poly, y2_poly_pred, color='green', linewidth=2, label="Order=2")
plt.plot(X5_poly, y5_poly_pred, color='peru', linewidth=2, label="Order=5")
plt.plot(X10_poly, y10_poly_pred, color='orange', linewidth=2, label="Order=10")
plt.xlabel('Passenger\'s Age')
plt.ylabel('Ticket price')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Now we will try to predict if a passenger survived based on the passenger class. Whether a passenger survived or not is a categorical variable, so we have to implement a **logistic** regression. Logistic regression will be covered in the lecture on the 1st of December, but you can already get acquainted with it in this post. **1.8 Fit a logistic regression predicting if a passenger has survived based on their class. (0.5 points)**
###Code
# TODO: fit a logistic regression (Pclass predicts Survived)
X = titanic["Pclass"].to_numpy().reshape(-1, 1) #Training data
y_true = titanic["Survived"] #Target values
logisticRegr = LogisticRegression()
logisticRegr.fit(X, y_true.values.ravel())
y_pred = np.round(logisticRegr.predict(X)).reshape(-1, 1)
y_pred
plt.scatter(X, y_true.values.ravel(), color='blue', s=10, label="True classes")
plt.scatter(X, y_pred, color='red', s=10, label="Predicted classes")
plt.rcParams['figure.figsize'] = [9.5, 6]
# plt.xlabel('Ticket price')
# plt.ylabel('Passanger\'s Age')
plt.legend()
plt.grid()
plt.show()
# print(X.shape, y_true.shape, y_pred.shape)
# print(y_true.values.ravel()[:10],y_pred[:10])
y_true[:100]
###Output
_____no_output_____
###Markdown
**1.9 Cross entropy loss. (1 point)** The measure that we use for estimating the error of a logistic regression is *Cross Entropy Loss*. Here is a good video explaining Maximum Likelihood Estimation and Cross Entropy Loss. Write the formula for Cross Entropy Loss and calculate the error of your model using this formula. Check your answer using the log_loss method from sklearn. *Cross Entropy Loss formula*: $$H(y, \hat{y}) = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\right]$$ where $y_i$ is the true label (0 or 1), $\hat{y}_i$ is the predicted probability of class 1, and $n$ is the number of samples.
###Code
# TODO: compute Cross Entropy Loss and check it using log_loss method.
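# One possible sketch (not the only valid answer): it assumes the logistic model
# `logisticRegr` fitted on X = Pclass in 1.8 and uses predicted probabilities,
# not the rounded class labels, when computing the loss.
y_prob = logisticRegr.predict_proba(X)[:, 1]   # P(Survived = 1)
eps = 1e-15                                    # clip to avoid log(0)
y_prob = np.clip(y_prob, eps, 1 - eps)
ce_manual = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
print('Cross entropy (formula): %.4f' % ce_manual)
print('Cross entropy (log_loss): %.4f' % log_loss(y_true, y_prob))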
###Output
_____no_output_____
###Markdown
**1.10 Fit a multiple logistic regression (0.5 points)** Now let's check if the Age of a passenger also had an influence on their survival chances. Fit a model with 2 predictors, compute the loss. Compare with the previous model.
###Code
# TODO: fit a multiple regression with Age and Pclass as predictors of survival.
# Hint: the predictors should be in shape of a 2d array, Age and Pclass as columns.
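# One possible sketch (assuming the preprocessed data has no missing values, as above):
# stack Age and Pclass as two feature columns and compare the loss with 1.9.
X_multi = titanic[["Age", "Pclass"]].to_numpy()   # shape (n_samples, 2)
y_surv = titanic["Survived"]
logreg_multi = LogisticRegression()
logreg_multi.fit(X_multi, y_surv)
prob_multi = logreg_multi.predict_proba(X_multi)[:, 1]
print('Cross entropy with Age + Pclass: %.4f' % log_loss(y_surv, prob_multi))
# a lower loss than the Pclass-only model would suggest Age adds information about survival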
###Output
_____no_output_____ |
notebooks/examples/diverging_stacked_bar_chart.ipynb | ###Markdown
Diverging Stacked Bar Chart---------------------------This example shows a diverging stacked bar chart for sentiments towards a set of eight questions, displayed as percentages with neutral responses straddling the 0% mark.
###Code
import altair as alt
import pandas as pd
alt.data_transformers.enable('json')
data = [
{
"question": "Question 1",
"type": "Strongly disagree",
"value": 24,
"percentage": 0.7,
"percentage_start": -19.1,
"percentage_end": -18.4
},
{
"question": "Question 1",
"type": "Disagree",
"value": 294,
"percentage": 9.1,
"percentage_start": -18.4,
"percentage_end": -9.2
},
{
"question": "Question 1",
"type": "Neither agree nor disagree",
"value": 594,
"percentage": 18.5,
"percentage_start": -9.2,
"percentage_end": 9.2
},
{
"question": "Question 1",
"type": "Agree",
"value": 1927,
"percentage": 59.9,
"percentage_start": 9.2,
"percentage_end": 69.2
},
{
"question": "Question 1",
"type": "Strongly agree",
"value": 376,
"percentage": 11.7,
"percentage_start": 69.2,
"percentage_end": 80.9
},
{
"question": "Question 2",
"type": "Strongly disagree",
"value": 2,
"percentage": 18.2,
"percentage_start": -36.4,
"percentage_end": -18.2
},
{
"question": "Question 2",
"type": "Disagree",
"value": 2,
"percentage": 18.2,
"percentage_start": -18.2,
"percentage_end": 0
},
{
"question": "Question 2",
"type": "Neither agree nor disagree",
"value": 0,
"percentage": 0,
"percentage_start": 0,
"percentage_end": 0
},
{
"question": "Question 2",
"type": "Agree",
"value": 7,
"percentage": 63.6,
"percentage_start": 0,
"percentage_end": 63.6
},
{
"question": "Question 2",
"type": "Strongly agree",
"value": 11,
"percentage": 0,
"percentage_start": 63.6,
"percentage_end": 63.6
},
{
"question": "Question 3",
"type": "Strongly disagree",
"value": 2,
"percentage": 20,
"percentage_start": -30,
"percentage_end": -10
},
{
"question": "Question 3",
"type": "Disagree",
"value": 0,
"percentage": 0,
"percentage_start": -10,
"percentage_end": -10
},
{
"question": "Question 3",
"type": "Neither agree nor disagree",
"value": 2,
"percentage": 20,
"percentage_start": -10,
"percentage_end": 10
},
{
"question": "Question 3",
"type": "Agree",
"value": 4,
"percentage": 40,
"percentage_start": 10,
"percentage_end": 50
},
{
"question": "Question 3",
"type": "Strongly agree",
"value": 2,
"percentage": 20,
"percentage_start": 50,
"percentage_end": 70
},
{
"question": "Question 4",
"type": "Strongly disagree",
"value": 0,
"percentage": 0,
"percentage_start": -15.6,
"percentage_end": -15.6
},
{
"question": "Question 4",
"type": "Disagree",
"value": 2,
"percentage": 12.5,
"percentage_start": -15.6,
"percentage_end": -3.1
},
{
"question": "Question 4",
"type": "Neither agree nor disagree",
"value": 1,
"percentage": 6.3,
"percentage_start": -3.1,
"percentage_end": 3.1
},
{
"question": "Question 4",
"type": "Agree",
"value": 7,
"percentage": 43.8,
"percentage_start": 3.1,
"percentage_end": 46.9
},
{
"question": "Question 4",
"type": "Strongly agree",
"value": 6,
"percentage": 37.5,
"percentage_start": 46.9,
"percentage_end": 84.4
},
{
"question": "Question 5",
"type": "Strongly disagree",
"value": 0,
"percentage": 0,
"percentage_start": -10.4,
"percentage_end": -10.4
},
{
"question": "Question 5",
"type": "Disagree",
"value": 1,
"percentage": 4.2,
"percentage_start": -10.4,
"percentage_end": -6.3
},
{
"question": "Question 5",
"type": "Neither agree nor disagree",
"value": 3,
"percentage": 12.5,
"percentage_start": -6.3,
"percentage_end": 6.3
},
{
"question": "Question 5",
"type": "Agree",
"value": 16,
"percentage": 66.7,
"percentage_start": 6.3,
"percentage_end": 72.9
},
{
"question": "Question 5",
"type": "Strongly agree",
"value": 4,
"percentage": 16.7,
"percentage_start": 72.9,
"percentage_end": 89.6
},
{
"question": "Question 6",
"type": "Strongly disagree",
"value": 1,
"percentage": 6.3,
"percentage_start": -18.8,
"percentage_end": -12.5
},
{
"question": "Question 6",
"type": "Disagree",
"value": 1,
"percentage": 6.3,
"percentage_start": -12.5,
"percentage_end": -6.3
},
{
"question": "Question 6",
"type": "Neither agree nor disagree",
"value": 2,
"percentage": 12.5,
"percentage_start": -6.3,
"percentage_end": 6.3
},
{
"question": "Question 6",
"type": "Agree",
"value": 9,
"percentage": 56.3,
"percentage_start": 6.3,
"percentage_end": 62.5
},
{
"question": "Question 6",
"type": "Strongly agree",
"value": 3,
"percentage": 18.8,
"percentage_start": 62.5,
"percentage_end": 81.3
},
{
"question": "Question 7",
"type": "Strongly disagree",
"value": 0,
"percentage": 0,
"percentage_start": -10,
"percentage_end": -10
},
{
"question": "Question 7",
"type": "Disagree",
"value": 0,
"percentage": 0,
"percentage_start": -10,
"percentage_end": -10
},
{
"question": "Question 7",
"type": "Neither agree nor disagree",
"value": 1,
"percentage": 20,
"percentage_start": -10,
"percentage_end": 10
},
{
"question": "Question 7",
"type": "Agree",
"value": 4,
"percentage": 80,
"percentage_start": 10,
"percentage_end": 90
},
{
"question": "Question 7",
"type": "Strongly agree",
"value": 0,
"percentage": 0,
"percentage_start": 90,
"percentage_end": 90
},
{
"question": "Question 8",
"type": "Strongly disagree",
"value": 0,
"percentage": 0,
"percentage_start": 0,
"percentage_end": 0
},
{
"question": "Question 8",
"type": "Disagree",
"value": 0,
"percentage": 0,
"percentage_start": 0,
"percentage_end": 0
},
{
"question": "Question 8",
"type": "Neither agree nor disagree",
"value": 0,
"percentage": 0,
"percentage_start": 0,
"percentage_end": 0
},
{
"question": "Question 8",
"type": "Agree",
"value": 0,
"percentage": 0,
"percentage_start": 0,
"percentage_end": 0
},
{
"question": "Question 8",
"type": "Strongly agree",
"value": 2,
"percentage": 100,
"percentage_start": 0,
"percentage_end": 100
}
]
color_scale = alt.Scale(
domain=["Strongly disagree",
"Disagree",
"Neither agree nor disagree",
"Agree",
"Strongly agree"],
range=["#c30d24", "#f3a583", "#cccccc", "#94c6da", "#1770ab"]
)
y_axis = alt.Axis(title='Question',
offset=5,
ticks=False,
minExtent=60,
domain=False)
source = pd.DataFrame(data)
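# each response type spans [percentage_start, percentage_end]; disagreement categories use
# negative values so the bars diverge around 0 while neutral answers straddle the midpoint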
alt.Chart(source).mark_bar().encode(
x='percentage_start:Q',
x2='percentage_end:Q',
y=alt.Y('question:N', axis=y_axis),
color=alt.Color(
'type:N',
legend=alt.Legend( title='Response'),
scale=color_scale,
)
)
###Output
_____no_output_____ |
project-tv-script-generation/dlnd_tv_script_generation.ipynb | ###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
#text = open(data_dir, encoding='utf-8').read()
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
#import unicodedata
#import string
#import re
from collections import Counter
import torch.nn.functional as F
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
#normalized_words = [normalizeString(s) for s in text]
#word_count = Counter(normalized_words)
word_count = Counter(text)
sorted_vocab = sorted(word_count, key=word_count.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punct_dict = {
'.' : '§§POINT§§',
',' : '§§COMMA§§',
'"' : '§§DOUBLEQUOTES§§',
';' : '§§SEMICOLON§§',
'!' : '§§EXCLAMATIONMARK§§',
'?' : '§§QUESTIONMARK§§',
'(' : '§§LEFTPARENTHESES§§',
')' : '§§RIGHTPARENTHESES§§',
'-' : '§§DASH§§',
'\n' : '§§RETURN§§'
}
return punct_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
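# each feature row is a window of `sequence_length` consecutive word ids;
# its target is the single word id that immediately follows that window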
feature_tensors = [[words[index] for index in range(idx, idx + sequence_length)] for idx in range(len(words)) if (idx + sequence_length) <= (len(words) - 1)]
target_tensors = [target for target in words[sequence_length:]]
# creat tensors out of the last lists
feature_tensors = torch.tensor(feature_tensors)
target_tensors = torch.tensor(target_tensors)
# create a TensorDataset object ...
tensordataset = TensorDataset(feature_tensors, target_tensors)
# create the data loader ....
dataloader = DataLoader(tensordataset, batch_size=batch_size, shuffle=True)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 31, 32, 33, 34, 35],
[ 32, 33, 34, 35, 36],
[ 43, 44, 45, 46, 47],
[ 12, 13, 14, 15, 16],
[ 39, 40, 41, 42, 43],
[ 33, 34, 35, 36, 37],
[ 27, 28, 29, 30, 31],
[ 41, 42, 43, 44, 45],
[ 9, 10, 11, 12, 13],
[ 36, 37, 38, 39, 40]])
torch.Size([10])
tensor([ 36, 37, 48, 17, 44, 38, 32, 46, 14, 41])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
import numpy as np
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.output_size = output_size
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True, dropout=dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
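# word ids -> embeddings -> LSTM -> fully-connected scores over the vocabulary;
# only the scores for the last time step of each sequence are kept and returned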
embedded = self.embedding(nn_input)
lstm_output, new_hidden = self.lstm(embedded, hidden)
lstm_output_reshape = lstm_output.contiguous().view(-1, self.hidden_dim)
fc_output = self.fc(lstm_output_reshape)
f_softmax_out = F.log_softmax(fc_output, dim=1)
output_in_batches = f_softmax_out.view(self.batch_size, -1)
final_output = output_in_batches[:, -self.output_size:]
#print("self.outputsize: ", self.output_size)
#print("embedded shape: ", embedded.shape)
#print("lstm_output shape: ", lstm_output.shape)
#print("lstm_output_reshape shape: ", lstm_output_reshape.shape)
#print("drop_fc shape: ", drop_fc.shape)
#print("f_softmax_out shape: ", f_softmax_out.shape)
#print("output_in_batches shape: ", output_in_batches.shape)
#print("final_output shape: ", final_output.shape)
#print("final_output: ", final_output)
# return one batch of output word scores and the hidden state
return final_output, new_hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# save size of batch_size
self.batch_size = batch_size
# initialize hidden state with zero weights, and move to GPU if available
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_(),
weight.new(self.n_layers,batch_size,self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
# quick sanity check: run one batch from the test loader through the RNN
sequence_length = 33
batch_size = 5
model = RNN(len(vocab_to_int), output_size=10, embedding_dim=300, hidden_dim=256, n_layers=2, dropout=0.5)
# initialize a hidden state for this batch size
h = model.init_hidden(batch_size)
# build a small dataloader over the test text and grab a single batch
t_loader = batch_data(test_text, sequence_length, batch_size=batch_size)
data_iter = iter(t_loader)
inputs, target = next(data_iter)
print("inputs shape: ", inputs.shape)
print("target shape: ", target.shape)
# move to GPU if available and run a forward pass
if train_on_gpu:
    inputs = inputs.cuda()
    model = model.cuda()
output, hidden = model(inputs, h)
###Output
_____no_output_____
###Markdown
 Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
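One detail worth noting before the implementation: the hidden state carried over from the previous batch needs to be detached from its history, otherwise backpropagation would try to reach back through every earlier batch. A minimal sketch of that pattern (with stand-in tensors shaped like an LSTM `(h, c)` tuple) is:

```
import torch

# stand-ins for a hidden state produced by a previous forward pass
h = torch.zeros(2, 5, 16, requires_grad=True)
c = torch.zeros(2, 5, 16, requires_grad=True)
hidden = (h, c)

# detach each tensor so gradients stop at the current batch
hidden = tuple(each.data for each in hidden)

print([t.requires_grad for t in hidden])  # [False, False]
```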
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip = 1
# move data to GPU, if available
if (train_on_gpu):
inp = inp.cuda()
target = target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get output and new hidden state from the model
output, hidden = rnn(inp, h)
# calculate loss
loss = criterion(output.squeeze(), target)
    # backpropagate the loss
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
 Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. Model progress will be printed every `show_every_n_batches` batches. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
 HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15  # of words in a sequence
# Batch Size
batch_size = 200
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 280
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
from workspace_utils import active_session
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.465411370277405
Epoch: 1/10 Loss: 4.747380018234253
Epoch: 1/10 Loss: 4.523646339416504
Epoch: 1/10 Loss: 4.410965185165406
Epoch: 1/10 Loss: 4.332641193389892
Epoch: 1/10 Loss: 4.250781437397003
Epoch: 1/10 Loss: 4.22547828245163
Epoch: 1/10 Loss: 4.1736126127243045
Epoch: 2/10 Loss: 4.065602141122024
Epoch: 2/10 Loss: 3.9713831934928896
Epoch: 2/10 Loss: 3.9646511087417604
Epoch: 2/10 Loss: 3.9443734703063966
Epoch: 2/10 Loss: 3.9253846549987794
Epoch: 2/10 Loss: 3.910103404521942
Epoch: 2/10 Loss: 3.8933036794662477
Epoch: 2/10 Loss: 3.9087741861343384
Epoch: 3/10 Loss: 3.8143211881319683
Epoch: 3/10 Loss: 3.7305254492759703
Epoch: 3/10 Loss: 3.729267876148224
Epoch: 3/10 Loss: 3.7531130471229552
Epoch: 3/10 Loss: 3.7433264141082763
Epoch: 3/10 Loss: 3.7331958508491514
Epoch: 3/10 Loss: 3.727801914215088
Epoch: 3/10 Loss: 3.731751173019409
Epoch: 4/10 Loss: 3.6587215259671213
Epoch: 4/10 Loss: 3.595583642959595
Epoch: 4/10 Loss: 3.5899739193916322
Epoch: 4/10 Loss: 3.6007260189056396
Epoch: 4/10 Loss: 3.5976479263305663
Epoch: 4/10 Loss: 3.6087461557388307
Epoch: 4/10 Loss: 3.609230924129486
Epoch: 4/10 Loss: 3.606380813121796
Epoch: 5/10 Loss: 3.544965221484502
Epoch: 5/10 Loss: 3.477003323554993
Epoch: 5/10 Loss: 3.5038782482147215
Epoch: 5/10 Loss: 3.4993654556274416
Epoch: 5/10 Loss: 3.4956233019828797
Epoch: 5/10 Loss: 3.5124139609336855
Epoch: 5/10 Loss: 3.505984607219696
Epoch: 5/10 Loss: 3.5327468881607054
Epoch: 6/10 Loss: 3.4462271121641
Epoch: 6/10 Loss: 3.389488977909088
Epoch: 6/10 Loss: 3.4160489406585692
Epoch: 6/10 Loss: 3.4161289353370665
Epoch: 6/10 Loss: 3.423075870513916
Epoch: 6/10 Loss: 3.4306877388954162
Epoch: 6/10 Loss: 3.44356072473526
Epoch: 6/10 Loss: 3.452195174694061
Epoch: 7/10 Loss: 3.3693260550498962
Epoch: 7/10 Loss: 3.340730875968933
Epoch: 7/10 Loss: 3.3286486849784853
Epoch: 7/10 Loss: 3.3355954723358154
Epoch: 7/10 Loss: 3.350142655849457
Epoch: 7/10 Loss: 3.360555097579956
Epoch: 7/10 Loss: 3.3787125854492186
Epoch: 7/10 Loss: 3.3902198257446288
Epoch: 8/10 Loss: 3.318429421633482
Epoch: 8/10 Loss: 3.259853980541229
Epoch: 8/10 Loss: 3.2851018786430357
Epoch: 8/10 Loss: 3.275691987037659
Epoch: 8/10 Loss: 3.3088065972328184
Epoch: 8/10 Loss: 3.311209179878235
Epoch: 8/10 Loss: 3.3378564591407778
Epoch: 8/10 Loss: 3.3370056500434875
Epoch: 9/10 Loss: 3.2620486368735633
Epoch: 9/10 Loss: 3.224260495185852
Epoch: 9/10 Loss: 3.2384872851371767
Epoch: 9/10 Loss: 3.2424585890769957
Epoch: 9/10 Loss: 3.2472597703933714
Epoch: 9/10 Loss: 3.274127863407135
Epoch: 9/10 Loss: 3.2769912271499635
Epoch: 9/10 Loss: 3.2860502099990843
Epoch: 10/10 Loss: 3.2116867817938326
Epoch: 10/10 Loss: 3.173737900733948
Epoch: 10/10 Loss: 3.1908502650260924
Epoch: 10/10 Loss: 3.1961592049598693
Epoch: 10/10 Loss: 3.2073359265327452
Epoch: 10/10 Loss: 3.2222299757003783
Epoch: 10/10 Loss: 3.2426239256858826
Epoch: 10/10 Loss: 3.2621713438034057
Model Trained and Saved
###Markdown
 Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried different values of everything (including different embedding dimensions). I noticed that the sequence length can influence the final loss reached during training quite a lot. What I think made training work well here was the combination of the learning rate and the clipping of the gradients during training. As for the hidden dimension and the number of layers, my intuition - after trying different values - is that any value with 256 <= hidden < 1000 and 2 <= n_layers <= 3 is going to (eventually) reach the expected loss, as long as there are enough epochs to train. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
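The top-k sampling step mentioned above can be illustrated on its own: keep only the k highest word scores, renormalise them, and sample one index. A minimal standalone sketch (with a made-up score vector, not tied to the trained model):

```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.2, 3.1, 0.1, 2.7]])  # (1, vocab_size) word scores

p = F.softmax(scores, dim=1).data
top_k = 3
p, top_i = p.topk(top_k)          # keep the 3 most likely words
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

# sample the next word index with some element of randomness
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```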
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:36: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
 TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
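As a tiny hand-run of what the two dictionaries should satisfy (the exact integer ids are arbitrary; only the round-trip property matters), consider a toy word list:

```
words = ['jerry', 'hello', 'jerry', 'elaine']

unique_words = set(words)
vocab_to_int = {word: i for i, word in enumerate(unique_words)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}

# every word maps to an id and back to itself
assert all(int_to_vocab[vocab_to_int[w]] == w for w in words)
print(vocab_to_int, int_to_vocab)
```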
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab_to_int = {}
int_to_vocab = {}
unique_words = set(text)
for i, word in enumerate(unique_words):
vocab_to_int[word] = i
int_to_vocab[i] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
 Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
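To make the effect concrete, here is a small standalone sketch (using a two-entry stand-in for the dictionary you will return) of how substituting each symbol with a spaced-out token lets the text split cleanly on whitespace:

```
text = 'bye! bye.'

token_dict = {'!': '||bang||', '.': '||period||'}

# replace each punctuation symbol with ' <token> ' so it becomes its own word
for symbol, token in token_dict.items():
    text = text.replace(symbol, ' {} '.format(token))

print(text.split())
# ['bye', '||bang||', 'bye', '||period||']
```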
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
punctuation_tokens = {
'.': '||period||',
',': '||comma||',
'"': '||doublequote||',
';': '||semicolon||',
'!': '||bang||',
'?': '||questionmark||',
'(': '||leftparen||',
')': '||rightparen||',
'-': '||dash||',
'\n': '||newline||',
}
return punctuation_tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
 Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
train_set_size = int(len(int_text) * 0.8)
train_set = int_text[:train_set_size]
val_set = int_text[train_set_size:]
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
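As a sanity check of the sliding-window construction described above (independent of the `batch_data` implementation you will write below), the toy example can be built by hand and wrapped in a `TensorDataset`/`DataLoader`:

```
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
data_loader = DataLoader(data, batch_size=3)

# the first feature row is [1, 2, 3, 4] and its target is 5
for x, y in data_loader:
    print(x, y)
```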
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
text_len = len(words)
num_batches = text_len // (sequence_length * batch_size)
# Trim text to be a multiple batch size
trimmed_text_len = num_batches * batch_size * sequence_length
words = list(words[:trimmed_text_len])
if trimmed_text_len == 0:
raise Exception(
"Text length({}) less than sequence length({}) times batch size({})".format(
text_len,
sequence_length,
batch_size
))
# append a newline to the text so we don't get an
# IndexError when we traverse
words.append(vocab_to_int['||newline||'])
inputs = []
labels = []
for i in range(0, trimmed_text_len - sequence_length + 1):
x = words[i:i + sequence_length]
y = words[i + sequence_length]
inputs.append(x)
labels.append(y)
input_tensor = torch.from_numpy(np.array(inputs))
output_tensor = torch.from_numpy(np.array(labels))
tensor_dataset = TensorDataset(input_tensor, output_tensor)
# return a dataloader
return DataLoader(tensor_dataset, batch_size=batch_size, drop_last=True, shuffle=True)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
num_test_runs = 10
for run in range(num_test_runs):
test_text_len = np.random.randint(low=50, high=1000)
test_text = range(test_text_len)
test_batch_size = np.random.randint(low=1, high=int(np.sqrt(test_text_len)))
test_seq_len = np.random.randint(low=1, high=int(np.sqrt(test_text_len)))
print("Testing with text of len: {}, seq len: {}, batch size: {}\n".format(test_text_len, test_seq_len, test_batch_size))
t_loader = batch_data(test_text, sequence_length=test_seq_len, batch_size=test_batch_size)
for i, d in enumerate(t_loader, 1):
x = d[0]
y = d[1]
assert(x.shape[0] == test_batch_size), "Expected batch size {} got batch size {}".format(test_batch_size, x.shape[0])
        assert(x.shape[1] == test_seq_len), "Expected seq len {} got seq len {}".format(test_seq_len, x.shape[1])
        assert(y.shape[0] == test_batch_size), "Expected batch size {} got batch size {}".format(test_batch_size, y.shape[0])
print("All tests passed")
###Output
Testing with text of len: 63, seq len: 1, batch size: 3
Testing with text of len: 465, seq len: 18, batch size: 17
Testing with text of len: 463, seq len: 16, batch size: 1
Testing with text of len: 896, seq len: 1, batch size: 9
Testing with text of len: 695, seq len: 5, batch size: 4
Testing with text of len: 904, seq len: 26, batch size: 15
Testing with text of len: 579, seq len: 19, batch size: 1
Testing with text of len: 467, seq len: 3, batch size: 20
Testing with text of len: 291, seq len: 11, batch size: 13
Testing with text of len: 913, seq len: 27, batch size: 13
All tests passed
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
self.embedding.weight.data.normal_(0, 1.0 / np.sqrt(self.vocab_size))
self.gru = nn.GRU(
input_size=self.embedding_dim,
hidden_size=self.hidden_dim,
num_layers=self.n_layers,
batch_first=True,
dropout=self.dropout
)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.shape[0]
seq_len = nn_input.shape[1]
# return one batch of output word scores and the hidden state
embedded_input = self.embedding(nn_input)
gru_out, hidden = self.gru(embedded_input, hidden)
#reshape to 2d tensor
gru_out = gru_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(gru_out)
out = out.view(batch_size, seq_len, self.output_size)
# grab just the last output
return out[:,-1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# initialize hidden state with zero weights, and move to GPU if available
h0 = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
if train_on_gpu:
h0 = h0.cuda()
return h0
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
 Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
optimizer.zero_grad()
# detach hidden state history
hidden = hidden.data
# Move to GPU if needed
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
out, hidden = rnn(inp, hidden)
loss = criterion(out, target)
loss_val = loss.item()
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss_val, hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
 Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. Model progress will be printed every `show_every_n_batches` batches. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
min_val_loss = np.Inf
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
train_loss = np.average(batch_losses)
batch_losses = []
rnn.eval()
val_loss_acc = 0.0
val_hidden = rnn.init_hidden(batch_size)
for val_batch_i, (val_inputs, val_labels) in enumerate(val_loader, 1):
val_inputs, val_labels = val_inputs.cuda(), val_labels.cuda()
val_output, val_hidden = rnn(val_inputs, val_hidden)
val_loss = criterion(val_output, val_labels)
val_loss_acc += val_loss.item()
del val_inputs
del val_labels
del val_loss
mean_val_loss = val_loss_acc / val_batch_i
print('Epoch: {:>4}/{:<4} Train loss: {}, val loss: {}\n'.format(
epoch_i, n_epochs, train_loss, mean_val_loss))
# if mean_val_loss < min_val_loss:
# min_val_loss = mean_val_loss
# print("Validation loss less than previous seen minimum. Saving model...")
# helper.save_model('./save/trained_rnn', rnn)
rnn.train()
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(train_set, sequence_length, batch_size)
val_loader = batch_data(val_set, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.0001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 500
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Train loss: 6.089443292617798, val loss: 5.888545632276542
Epoch: 1/20 Train loss: 5.38032785320282, val loss: 5.229292028837706
Epoch: 1/20 Train loss: 5.001903301239014, val loss: 5.066568021382936
Epoch: 1/20 Train loss: 4.8311947569847105, val loss: 4.965543107045821
Epoch: 1/20 Train loss: 4.729884071826935, val loss: 4.8825112027689395
Epoch: 1/20 Train loss: 4.647920533180237, val loss: 4.824099782561981
Epoch: 1/20 Train loss: 4.609042806148529, val loss: 4.772486563291886
Epoch: 1/20 Train loss: 4.533252994060517, val loss: 4.745904328927795
Epoch: 1/20 Train loss: 4.502012115001678, val loss: 4.712778917227485
Epoch: 1/20 Train loss: 4.429528242111206, val loss: 4.688688200613506
Epoch: 1/20 Train loss: 4.425151922702789, val loss: 4.673239785703161
Epoch: 2/20 Train loss: 4.32508260294507, val loss: 4.681823958760118
Epoch: 2/20 Train loss: 4.302997501373291, val loss: 4.666792680927109
Epoch: 2/20 Train loss: 4.277871142864227, val loss: 4.671902093379252
Epoch: 2/20 Train loss: 4.275054126739502, val loss: 4.6539782286548546
Epoch: 2/20 Train loss: 4.270262873649597, val loss: 4.649798901326541
Epoch: 2/20 Train loss: 4.24421347618103, val loss: 4.646515612468349
Epoch: 2/20 Train loss: 4.217134667873383, val loss: 4.639997927038853
Epoch: 2/20 Train loss: 4.240438857078552, val loss: 4.640751204960976
Epoch: 2/20 Train loss: 4.249748113632202, val loss: 4.624884529367808
Epoch: 2/20 Train loss: 4.208532666683197, val loss: 4.619073774936475
Epoch: 2/20 Train loss: 4.203614521026611, val loss: 4.615755053713134
Epoch: 3/20 Train loss: 4.08810302955614, val loss: 4.626964970060696
Epoch: 3/20 Train loss: 4.100640897274017, val loss: 4.628881762575296
Epoch: 3/20 Train loss: 4.097302923202514, val loss: 4.626060246048083
Epoch: 3/20 Train loss: 4.094220775604248, val loss: 4.625499844980205
Epoch: 3/20 Train loss: 4.088377061843872, val loss: 4.618856930063301
Epoch: 3/20 Train loss: 4.086565406322479, val loss: 4.621455580832891
Epoch: 3/20 Train loss: 4.075271434307099, val loss: 4.617256336026779
Epoch: 3/20 Train loss: 4.0792808923721315, val loss: 4.608320056662618
Epoch: 3/20 Train loss: 4.06850001001358, val loss: 4.605131064498056
Epoch: 3/20 Train loss: 4.073314413547516, val loss: 4.598010234647385
Epoch: 3/20 Train loss: 4.045189538955689, val loss: 4.597710606346378
Epoch: 4/20 Train loss: 3.951297586538251, val loss: 4.596722706519416
Epoch: 4/20 Train loss: 3.9634818296432495, val loss: 4.591242590595956
Epoch: 4/20 Train loss: 3.9785354022979735, val loss: 4.595947196965942
Epoch: 4/20 Train loss: 3.9646830744743347, val loss: 4.589471628032544
Epoch: 4/20 Train loss: 3.9598280696868895, val loss: 4.580066362501821
Epoch: 4/20 Train loss: 3.9620407814979552, val loss: 4.577522607910453
Epoch: 4/20 Train loss: 3.957287353992462, val loss: 4.579697091945047
Epoch: 4/20 Train loss: 3.960713074684143, val loss: 4.565049192388766
Epoch: 4/20 Train loss: 3.9513324022293093, val loss: 4.55822671463886
Epoch: 4/20 Train loss: 3.9562961134910584, val loss: 4.563128009642346
Epoch: 4/20 Train loss: 3.946153395175934, val loss: 4.553070252537299
Epoch: 5/20 Train loss: 3.8713922320434624, val loss: 4.55371470379263
Epoch: 5/20 Train loss: 3.835891191005707, val loss: 4.554215290119014
Epoch: 5/20 Train loss: 3.8666149320602416, val loss: 4.557163587828349
Epoch: 5/20 Train loss: 3.8644120378494264, val loss: 4.5504185377564506
Epoch: 5/20 Train loss: 3.838248219013214, val loss: 4.554908227542907
Epoch: 5/20 Train loss: 3.881598852157593, val loss: 4.549014977193548
Epoch: 5/20 Train loss: 3.8765457849502565, val loss: 4.547274337216536
Epoch: 5/20 Train loss: 3.8755537700653075, val loss: 4.536517882364271
Epoch: 5/20 Train loss: 3.875738375663757, val loss: 4.533529537199555
Epoch: 5/20 Train loss: 3.870757665634155, val loss: 4.535768347562346
Epoch: 5/20 Train loss: 3.867671820640564, val loss: 4.530416350642755
Epoch: 6/20 Train loss: 3.7497484801314207, val loss: 4.533847979284002
Epoch: 6/20 Train loss: 3.796032989025116, val loss: 4.535632892166478
Epoch: 6/20 Train loss: 3.7660351166725157, val loss: 4.539057220118435
Epoch: 6/20 Train loss: 3.78435298204422, val loss: 4.544943564053145
Epoch: 6/20 Train loss: 3.8019707641601563, val loss: 4.541727619260399
Epoch: 6/20 Train loss: 3.7767844052314756, val loss: 4.538789587110131
Epoch: 6/20 Train loss: 3.787640649318695, val loss: 4.5347009027836735
Epoch: 6/20 Train loss: 3.797083462238312, val loss: 4.530216853167189
Epoch: 6/20 Train loss: 3.796281076431274, val loss: 4.538494392639445
Epoch: 6/20 Train loss: 3.8056915717124937, val loss: 4.533379843974817
Epoch: 6/20 Train loss: 3.7789505429267884, val loss: 4.524802388348452
Epoch: 7/20 Train loss: 3.70558353132439, val loss: 4.54135556320564
Epoch: 7/20 Train loss: 3.6803517718315124, val loss: 4.53808564810207
Epoch: 7/20 Train loss: 3.707981806278229, val loss: 4.53672881260289
Epoch: 7/20 Train loss: 3.727965608119965, val loss: 4.541944288538033
Epoch: 7/20 Train loss: 3.7338137564659117, val loss: 4.5347937710256
Epoch: 7/20 Train loss: 3.7170997552871703, val loss: 4.531428056624915
Epoch: 7/20 Train loss: 3.7191410465240478, val loss: 4.53699447423799
Epoch: 7/20 Train loss: 3.711714801311493, val loss: 4.53272362421364
Epoch: 7/20 Train loss: 3.7127138934135435, val loss: 4.529020150159227
Epoch: 7/20 Train loss: 3.733606447696686, val loss: 4.531387109220929
Epoch: 7/20 Train loss: 3.730205627441406, val loss: 4.524798944581748
Epoch: 8/20 Train loss: 3.6367429027657936, val loss: 4.528649301988946
Epoch: 8/20 Train loss: 3.608758909702301, val loss: 4.536122329635016
Epoch: 8/20 Train loss: 3.636224901199341, val loss: 4.531542666444167
Epoch: 8/20 Train loss: 3.642311806678772, val loss: 4.531848789548771
Epoch: 8/20 Train loss: 3.666326376438141, val loss: 4.52992889261486
Epoch: 8/20 Train loss: 3.6457205715179444, val loss: 4.532586033864361
Epoch: 8/20 Train loss: 3.6608132266998292, val loss: 4.533744542009973
Epoch: 8/20 Train loss: 3.658338613986969, val loss: 4.533045496504628
Epoch: 8/20 Train loss: 3.695858839035034, val loss: 4.522558821286118
Epoch: 8/20 Train loss: 3.684595164299011, val loss: 4.5256659191576505
Epoch: 8/20 Train loss: 3.677954578399658, val loss: 4.515781677052475
Epoch: 9/20 Train loss: 3.559159166574059, val loss: 4.531293803105859
Epoch: 9/20 Train loss: 3.57889263343811, val loss: 4.5268093157878795
Epoch: 9/20 Train loss: 3.569286174297333, val loss: 4.526701747469974
Epoch: 9/20 Train loss: 3.5622908320426943, val loss: 4.527236569949069
Epoch: 9/20 Train loss: 3.594777565956116, val loss: 4.52576502953784
Epoch: 9/20 Train loss: 3.59393940448761, val loss: 4.529714261824661
Epoch: 9/20 Train loss: 3.603114803314209, val loss: 4.526787569576069
Epoch: 9/20 Train loss: 3.6289773931503295, val loss: 4.531483062485982
Epoch: 9/20 Train loss: 3.632167550086975, val loss: 4.523828966485415
Epoch: 9/20 Train loss: 3.6176296267509462, val loss: 4.52546941779173
Epoch: 9/20 Train loss: 3.627548965930939, val loss: 4.522664796542228
Epoch: 10/20 Train loss: 3.514357979561616, val loss: 4.522655114585157
Epoch: 10/20 Train loss: 3.4983480253219605, val loss: 4.530439458535334
Epoch: 10/20 Train loss: 3.541814908981323, val loss: 4.533421656654583
Epoch: 10/20 Train loss: 3.5416244578361513, val loss: 4.530649678495996
Epoch: 10/20 Train loss: 3.5546161065101622, val loss: 4.5293445623204205
Epoch: 10/20 Train loss: 3.5405620584487916, val loss: 4.530408524711669
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Model hyperparameters I started off with a pretty simple network with two layers. I didn't want to complicate the architecture early on and wanted the simplest model that would work. I chose 2 layers and embedding_dim set to 200. The reason I picked 200 was from the insights in the hyperparameters lecture. It said 200 was a good place to start. I set the hidden_dim to 300, a value between the embedding_dim and the output. Later I increased it to 500 to decrease the loss. This was a bit tricky since these were pretty close to how large I could make them given the memory on my GPU. Training hyperparameters I picked a learning_rate = 0.01 and num_epochs = 40. These were arbitrary choices. However, when I trained this model, it did not converge with the loss oscillating at around 4.7-4.8. This indicated that I needed to decrease the learning rate. I set learning_rate to 0.001 and the loss started decreasing. However, I got the best stability at 0.0001.With these changes, 20 epochs was enough.64 was a batch size I picked arbitrarily. It worked quite well, so I didn't change it. Data hyperparameters The sequence length I picked was 10. The reason for this was that when I plotted a histogram of the sentence lengths, about 65% of sentences were less than 10 words long (once the 0 length sentences were ignored). What I was going for here was that while producing each sentence, the previous sentence was what was used as the sequence. This seemed to work quite well in practice. I couldn't try much larger sequence lengths due to memory constraints, but in general, it seemed that a larger sequence length reduced the loss. Decreasing the sequence length to 5 didn't produce a training loss of < 3.5 in 20 epochs. Validation loss I tried using the validation loss, but could never get it below 3.5. The model would always start overfitting around a loss of 4.5. This was despite a dropout of 0.5. Given this, there were 3 alternatives:1. Reduce the number of features2. Reduce the complexity of the network3. Get more dataI tried reducing the size of the embedding dim (1 above) and the hidden_dim (2 above) but none of these approaches worked. I concluded that to get a lower validation loss, we'd need more data. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
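Returning to the sequence-length choice discussed in the answer above, a quick sketch of that sentence-length check (assuming `lines` from the data-exploration cell is still in scope; the 65% figure quoted is the author's, not recomputed here) could look like:

```
import numpy as np

# word counts per line of dialogue; `lines` comes from the data-exploration cell above
word_count_line = [len(line.split()) for line in lines]
non_empty = [n for n in word_count_line if n > 0]

# fraction of non-empty lines that fit within a given sequence length
for seq_len in (5, 10, 15, 20):
    frac = np.mean([n <= seq_len for n in non_empty])
    print('seq_len {:>2}: {:.0%} of lines fit'.format(seq_len, frac))
```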
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/home/rahul/anaconda3/envs/deep-learning/lib/python3.6/site-packages/ipykernel_launcher.py:50: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
 TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
 Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
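One possible `batch_data` sketch is shown below, assuming a sliding window of stride 1 (other window strategies are also acceptable); `numpy` and `torch` are assumed to be imported as in the surrounding cells:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    """Sketch: each feature is a window of words, the target is the next word."""
    words = np.asarray(words)
    n_targets = len(words) - sequence_length
    features = np.array([words[i:i + sequence_length] for i in range(n_targets)])
    targets = words[sequence_length:sequence_length + n_targets]
    data = TensorDataset(torch.from_numpy(features).long(),
                         torch.from_numpy(targets).long())
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```

With `shuffle=True` the batches come back in a random order, which is why the expected output in the test cell further below may appear reordered.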
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
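To make the hints concrete, here is a hedged sketch of one possible LSTM-based module; the use of an embedding layer, the zero-initialised hidden state, and the exact layer ordering are design choices rather than requirements:

```python
import torch
import torch.nn as nn

class ExampleRNN(nn.Module):
    """Sketch: embedding -> LSTM -> dropout -> fully-connected layer."""

    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim,
                 n_layers, dropout=0.5):
        super().__init__()
        self.output_size = output_size
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input), hidden)
        # stack the LSTM outputs before the fully-connected layer (hint 1)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        out = self.fc(self.dropout(lstm_out))
        # reshape to (batch_size, seq_length, output_size) and keep the last step (hint 2)
        out = out.view(batch_size, -1, self.output_size)
        return out[:, -1], hidden

    def init_hidden(self, batch_size):
        # zero-initialised hidden and cell states, moved to the GPU when one is available
        weight = next(self.parameters()).data
        h = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
        c = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
        if torch.cuda.is_available():
            h, c = h.cuda(), c.cuda()
        return (h, c)
```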
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
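A minimal sketch of one way to write this step, assuming the `nn` import and `train_on_gpu` flag from earlier cells and an LSTM-style hidden-state tuple; the gradient-clipping threshold of 5 is an arbitrary illustrative value:

```python
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """Sketch: one forward pass, loss computation, backward pass and optimizer step."""
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow back across batches
    hidden = tuple(h.data for h in hidden)
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())  # CrossEntropyLoss expects long targets
    loss.backward()
    # clip gradients to guard against exploding gradients in the RNN
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    return loss.item(), hidden
```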
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
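Purely as an illustration, one plausible starting configuration is sketched below; every number here is an assumption to be tuned against your own hardware and loss target, not a prescribed setting:

```python
# illustrative starting values only - tune as needed
sequence_length = 10              # words per input sequence
batch_size = 128
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)    # one id per unique token
output_size = vocab_size          # one score per vocabulary word
embedding_dim = 200
hidden_dim = 256
n_layers = 2
show_every_n_batches = 500
```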
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
int_to_vocab = dict(enumerate(set(text)))
vocab_to_int = {}
for (key, value) in int_to_vocab.items():
vocab_to_int[value] = key
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
# wrap tokens in '||' so they cannot be confused with ordinary words in the script
tokenization_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return tokenization_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
batch_size_total = batch_size * sequence_length
n_batches = len(words) // batch_size_total
feature_tensor = []
target_tensor = []
for n in range(0, len(words), sequence_length):
print(f' n {n}')
print(f'words {words[n: n + sequence_length]}')
feature_tensor.append(words[n: n + sequence_length])
try:
# the target is the word that immediately follows the sequence
target_tensor.append(words[n + sequence_length])
except IndexError:
target_tensor.append(words[0])
data = TensorDataset(torch.IntTensor(feature_tensor), torch.IntTensor(target_tensor))
dataloader = DataLoader(data, batch_size=batch_size)
# TODO: Implement function
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
n 0
words range(0, 5)
n 5
words range(5, 10)
n 10
words range(10, 15)
n 15
words range(15, 20)
n 20
words range(20, 25)
n 25
words range(25, 30)
n 30
words range(30, 35)
n 35
words range(35, 40)
n 40
words range(40, 45)
n 45
words range(45, 50)
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39],
[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49]], dtype=torch.int32)
torch.Size([10])
tensor([ 5, 10, 15, 20, 25, 30, 35, 40, 45, 0], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers: embedding -> LSTM -> dropout -> fully-connected
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embed the word ids (embedding layers expect long indices), then run the LSTM
r_output, hidden = self.lstm(self.embedding(nn_input.long()), hidden)
# stack the LSTM outputs, apply dropout, then the fully-connected layer
output = self.fc(self.dropout(r_output.contiguous().view(-1, self.hidden_dim)))
# reshape into (batch_size, seq_length, output_size) and keep only the last word scores
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# initialize hidden and cell states with zero weights, and move to GPU if available
weight = next(self.parameters()).data
hidden = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
cell = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
if train_on_gpu:
hidden, cell = hidden.cuda(), cell.cuda()
return (hidden, cell)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
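Since the cell below leaves this function as a TODO, here is a hedged sketch of one possible implementation; it assumes the `nn` import and `train_on_gpu` flag from earlier cells, an LSTM-style hidden tuple, and an arbitrary gradient-clipping threshold:

```python
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """Sketch: move data to the GPU, run forward/backward, and step the optimizer."""
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    hidden = tuple(h.data for h in hidden)         # detach from the previous batch
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())        # CrossEntropyLoss wants long targets
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # illustrative clipping value
    optimizer.step()
    return loss.item(), hidden
```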
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
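As a rough, hedged starting point only (all of these values are assumptions to be tuned, not recommended settings):

```python
sequence_length = 10
batch_size = 128
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 200
hidden_dim = 256
n_layers = 2
```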
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu() # np.roll cannot operate on a CUDA tensor
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word":```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of words** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output one, next word.
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval() # eval mode
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the index of the most likely next word
top_i = torch.multinomial(output.exp().data, 1).item()
# retrieve that word from the dictionary
word = int_to_vocab[top_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = top_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
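Once implemented, the two dictionaries should invert each other; a quick sanity check on a toy word list (hypothetical example, not part of the project data) would look like:
```
# Toy round-trip check for the lookup tables
words = ['jerry', 'hello', 'jerry', 'george', 'hello', 'jerry']
vocab_to_int, int_to_vocab = create_lookup_tables(words)
assert int_to_vocab[vocab_to_int['jerry']] == 'jerry'
print(vocab_to_int)  # e.g. {'jerry': 0, 'hello': 1, 'george': 2} if ids follow word frequency
```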
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
from collections import Counter
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
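For instance, after tokenization and space-padding, each line splits cleanly into words and punctuation tokens (the token spellings below are just one possible choice):
```
bye!          ->  bye ||exclamation_mark||
okay, fine.   ->  okay ||comma|| fine ||period||
```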
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
# Note: simply surrounding the raw symbols with spaces might also work, but mapping
# each one to a unique token such as '<33>' avoids any ambiguity with real words
return {char: f"<{ord(char)}>" for char in ".,\";!?()-\n"}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
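The implementation below builds batches by reshaping tensors directly; for comparison, a sketch of the `TensorDataset`/`DataLoader` route described above (illustrative, with a hypothetical helper name) might look like this:
```
# Sketch: sliding-window features/targets wrapped in TensorDataset + DataLoader
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    features, targets = [], []
    for i in range(len(words) - sequence_length):
        features.append(words[i:i + sequence_length])   # e.g. [1, 2, 3, 4]
        targets.append(words[i + sequence_length])       # e.g. 5
    data = TensorDataset(torch.tensor(features), torch.tensor(targets))
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```
Note that this yields batch-first tensors of shape `(batch_size, sequence_length)`, whereas the reshaping approach below produces sequence-first batches, so the two are not interchangeable without also adjusting the model's `batch_first` setting.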
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: An iterable of (features, targets) batch tuples
"""
num_batches = (len(words)-1) // (sequence_length * batch_size)
keep = num_batches*sequence_length*batch_size
print(f"Discarding last {len(words) - keep} words")
features = torch.tensor(words[:keep]).view(batch_size, num_batches, sequence_length).transpose(0,1).transpose(1,2)
targets = torch.tensor(words[1:keep+1]).view(batch_size, num_batches, sequence_length).transpose(0,1).transpose(1,2)
return [*zip(features, targets[:,-1])]
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
for x, y in batch_data([*range(100)], 3, 10):
print(x)
# print(y)
###Output
Discarding last 10 words
tensor([[ 0, 9, 18, 27, 36, 45, 54, 63, 72, 81],
[ 1, 10, 19, 28, 37, 46, 55, 64, 73, 82],
[ 2, 11, 20, 29, 38, 47, 56, 65, 74, 83]])
tensor([[ 3, 12, 21, 30, 39, 48, 57, 66, 75, 84],
[ 4, 13, 22, 31, 40, 49, 58, 67, 76, 85],
[ 5, 14, 23, 32, 41, 50, 59, 68, 77, 86]])
tensor([[ 6, 15, 24, 33, 42, 51, 60, 69, 78, 87],
[ 7, 16, 25, 34, 43, 52, 61, 70, 79, 88],
[ 8, 17, 26, 35, 44, 53, 62, 71, 80, 89]])
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=9)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
Discarding last 5 words
torch.Size([5, 9])
tensor([[ 0, 5, 10, 15, 20, 25, 30, 35, 40],
[ 1, 6, 11, 16, 21, 26, 31, 36, 41],
[ 2, 7, 12, 17, 22, 27, 32, 37, 42],
[ 3, 8, 13, 18, 23, 28, 33, 38, 43],
[ 4, 9, 14, 19, 24, 29, 34, 39, 44]])
torch.Size([9])
tensor([ 5, 10, 15, 20, 25, 30, 35, 40, 45])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
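The class below keeps things simple: it passes `hidden=None` and lets `nn.LSTM` create a zero initial state, which is why `init_hidden` is omitted and its unit test is commented out. If you prefer an explicit initialiser, a typical LSTM version looks like the sketch here (assuming the constructor also stores `n_layers` and `hidden_dim` on `self`, and that `train_on_gpu` is available):
```
# Sketch of an explicit zero-initialised LSTM hidden state
def init_hidden(self, batch_size):
    weight = next(self.parameters()).data
    h = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
    c = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
    return (h.cuda(), c.cuda()) if train_on_gpu else (h, c)
```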
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super().__init__()
# TODO: Implement function
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout)
self.fc = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
x = self.embed(nn_input.long())
x, hidden = self.lstm(x, hidden)
x = x[-1,:,:] # Only keep last sequence item output
x = self.dropout(x)
x = self.fc(x)
# return one batch of output word scores and the hidden state
return x, hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden, train=True):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp = inp.cuda()
target = target.cuda()
if hidden:
hidden = tuple(h.detach() for h in hidden)
# perform backpropagation and optimization
output, hidden = rnn(inp, hidden)
loss = criterion(output, target.squeeze())
if train:
rnn.zero_grad()
loss.backward()
# clip gradients (max norm 5) to guard against exploding gradients; the exact threshold is a judgment call
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = None
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
# n_batches = len(train_loader.dataset)//batch_size
# if(batch_i > n_batches):
# break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 16 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = .0003
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 512
# Hidden Dimension
hidden_dim = 1024
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 100
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 6.320746541023254
Epoch: 1/30 Loss: 5.348134903907776
Epoch: 1/30 Loss: 5.1833203458786015
Epoch: 1/30 Loss: 5.0734469509124756
Epoch: 2/30 Loss: 4.881154561925817
Epoch: 2/30 Loss: 4.745476379394531
Epoch: 2/30 Loss: 4.7238505172729495
Epoch: 2/30 Loss: 4.669734661579132
Epoch: 3/30 Loss: 4.557466406292385
Epoch: 3/30 Loss: 4.445793311595917
Epoch: 3/30 Loss: 4.4521932744979855
Epoch: 3/30 Loss: 4.402828514575958
Epoch: 4/30 Loss: 4.307625784697356
Epoch: 4/30 Loss: 4.224611687660217
Epoch: 4/30 Loss: 4.217877686023712
Epoch: 4/30 Loss: 4.169713020324707
Epoch: 5/30 Loss: 4.081824730060719
Epoch: 5/30 Loss: 3.999491412639618
Epoch: 5/30 Loss: 3.9962826681137087
Epoch: 5/30 Loss: 3.9522921347618105
Epoch: 6/30 Loss: 3.8648497828730832
Epoch: 6/30 Loss: 3.7627077984809874
Epoch: 6/30 Loss: 3.7621408891677857
Epoch: 6/30 Loss: 3.711037130355835
Epoch: 7/30 Loss: 3.615943177541097
Epoch: 7/30 Loss: 3.5138127779960633
Epoch: 7/30 Loss: 3.4889926314353943
Epoch: 7/30 Loss: 3.448523542881012
Epoch: 8/30 Loss: 3.356293644728484
Epoch: 8/30 Loss: 3.249914195537567
Epoch: 8/30 Loss: 3.2156256890296935
Epoch: 8/30 Loss: 3.163530719280243
Epoch: 9/30 Loss: 3.0722286383310955
Epoch: 9/30 Loss: 2.9782372379302977
Epoch: 9/30 Loss: 2.9401678109169005
Epoch: 9/30 Loss: 2.8860844707489015
Epoch: 10/30 Loss: 2.8070294662758157
Epoch: 10/30 Loss: 2.704846746921539
Epoch: 10/30 Loss: 2.655916314125061
Epoch: 10/30 Loss: 2.6334973192214965
Epoch: 11/30 Loss: 2.5527074231041804
Epoch: 11/30 Loss: 2.440101854801178
Epoch: 11/30 Loss: 2.4132673108577727
Epoch: 11/30 Loss: 2.384680417776108
Epoch: 12/30 Loss: 2.2861942379562943
Epoch: 12/30 Loss: 2.1899501037597657
Epoch: 12/30 Loss: 2.1658310878276823
Epoch: 12/30 Loss: 2.120043692588806
Epoch: 13/30 Loss: 2.0541996876398723
Epoch: 13/30 Loss: 1.9452829146385193
Epoch: 13/30 Loss: 1.929740288257599
Epoch: 13/30 Loss: 1.8880230963230134
Epoch: 14/30 Loss: 1.8238688910448992
Epoch: 14/30 Loss: 1.7239552199840547
Epoch: 14/30 Loss: 1.7112397277355194
Epoch: 14/30 Loss: 1.6673466289043426
Epoch: 15/30 Loss: 1.6068431509865655
Epoch: 15/30 Loss: 1.5391413950920105
Epoch: 15/30 Loss: 1.5125179016590118
Epoch: 15/30 Loss: 1.478696836233139
Epoch: 16/30 Loss: 1.411489728645042
Epoch: 16/30 Loss: 1.350627772808075
Epoch: 16/30 Loss: 1.3385493898391723
Epoch: 16/30 Loss: 1.2946833491325378
Epoch: 17/30 Loss: 1.243518133516665
Epoch: 17/30 Loss: 1.1780248993635178
Epoch: 17/30 Loss: 1.1670920872688293
Epoch: 17/30 Loss: 1.1172199761867523
Epoch: 18/30 Loss: 1.0740072524106061
Epoch: 18/30 Loss: 1.02829485476017
Epoch: 18/30 Loss: 1.0020198231935502
Epoch: 18/30 Loss: 0.9775135004520417
Epoch: 19/30 Loss: 0.9251667852754946
Epoch: 19/30 Loss: 0.8898401129245758
Epoch: 19/30 Loss: 0.8612530297040939
Epoch: 19/30 Loss: 0.8250289970636367
Epoch: 20/30 Loss: 0.7899847441249424
Epoch: 20/30 Loss: 0.7448580867052078
Epoch: 20/30 Loss: 0.7287216418981552
Epoch: 20/30 Loss: 0.7126620370149612
Epoch: 21/30 Loss: 0.6679433670308855
Epoch: 21/30 Loss: 0.6317534220218658
Epoch: 21/30 Loss: 0.615033273100853
Epoch: 21/30 Loss: 0.598382982313633
Epoch: 22/30 Loss: 0.5677967638881118
Epoch: 22/30 Loss: 0.5382282266020775
Epoch: 22/30 Loss: 0.5280096444487572
Epoch: 22/30 Loss: 0.5113486337661743
Epoch: 23/30 Loss: 0.479440450447577
Epoch: 23/30 Loss: 0.44737132877111435
Epoch: 23/30 Loss: 0.44609891444444655
Epoch: 23/30 Loss: 0.4355119559168816
Epoch: 24/30 Loss: 0.40146029348726625
Epoch: 24/30 Loss: 0.37681356638669966
Epoch: 24/30 Loss: 0.37139121025800703
Epoch: 24/30 Loss: 0.3626461037993431
Epoch: 25/30 Loss: 0.33981207267001823
Epoch: 25/30 Loss: 0.31446532666683197
Epoch: 25/30 Loss: 0.3177489612996578
Epoch: 25/30 Loss: 0.30868755400180814
Epoch: 26/30 Loss: 0.281929725518933
Epoch: 26/30 Loss: 0.2624028177559376
Epoch: 26/30 Loss: 0.2682495655119419
Epoch: 26/30 Loss: 0.267210082411766
Epoch: 27/30 Loss: 0.24271566613956733
Epoch: 27/30 Loss: 0.22970589518547058
Epoch: 27/30 Loss: 0.22710575625300408
Epoch: 27/30 Loss: 0.22650425739586352
Epoch: 28/30 Loss: 0.20698450522290335
Epoch: 28/30 Loss: 0.1980937472730875
Epoch: 28/30 Loss: 0.19913665264844893
Epoch: 28/30 Loss: 0.20187171049416064
Epoch: 29/30 Loss: 0.18485633245220892
Epoch: 29/30 Loss: 0.17600062713027
Epoch: 29/30 Loss: 0.1665116700530052
Epoch: 29/30 Loss: 0.17081114858388902
Epoch: 30/30 Loss: 0.16158955533195424
Epoch: 30/30 Loss: 0.161728732958436
Epoch: 30/30 Loss: 0.15483054615557193
Epoch: 30/30 Loss: 0.15256089434027673
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I started with the same parameters that were used in Character_Level_RNN_Solution and experimented from there.I wasn't getting < 3.5 loss so I tried lower and higher embedding_dim, hidden_dim, sequence_length values. Raising the values gave better results.Using a high sequence_length > 256 seemed to slow the training down a lot. I believe it was spending a lot of time backpropagating through the history. I tried torch.utils.bottleneck and I think it was saying that all the time was spent in the loss.backward() call. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
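The top-k step can be seen in isolation: keep the k highest word scores, renormalise them, and draw one index at random. A small standalone illustration with dummy scores (not part of the project code):
```
# Dummy illustration of top-k sampling over a 6-word "vocabulary"
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.2, 0.1, 3.0, 0.3]])   # pretend RNN output
p = F.softmax(scores, dim=1).data
p, top_i = p.topk(5)                                        # 5 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())             # weighted random pick
print(word_i)                                               # index of the sampled next word
```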
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, 1), prime_id)
predicted = [int_to_vocab[prime_id]]
# initialize the hidden state
hidden = None
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# get the output of the rnn
output, hidden = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq[-1][-1] = torch.LongTensor([word_i])
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'kramer:' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
kramer: i'm sorry.
morty: jerry!
kramer: this is the best points. i said he was going to tell ya, but i'm going to get your hand.
mike:(hands back back to sleep) hey... so i have to have to tell you something, but you find the other thing with you or
jerry: no, it's tarragon, it's not my family.
elaine: uhh! youre out! ill be the back!
kramer: look, i'm gonna tell me; where going to get up out, can i get all these apartment here again.(wilhelm)(sits) i'm sorry.(picks up) this sorry, i'm gonna be right right right now.(kramer and george as then leaves) hey, where is she for me on the flight?
george:(pointing) kramer, elaine, go.
george:(frustrated) ohh...(listens) oh, yeah. yeah. yeah.(tapping) yeah. yeah.(listens) oh, yeah. yeah. yeah. yeah.(listens) what? oh, what happened, i'll tell you you. i'll tell you; i'm going to make tell you what you did.
kramer: well, i need the same of money with me or they want to see me.
george: well, i was just an same- it's part over.
elaine: this is not an hard of eight isn't. that's an artist than soon again on any than?
morty: that's fine.
jerry: so youre just a good idea?
george:(worked up) there come all nice sorry.(to susan) : what is that?
george: that's exactly! i'm cooking a huge- hm.(sits) puke. oh, yeah. i got to tell you what i said, i think you should will be something.
mrs(hands away) how you have to ask i'm going to play, play----
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counter = Counter(text)
sorted_vocab = sorted(counter, key=counter.get, reverse=True)
vocab_to_int = { w:i for i, w in enumerate(sorted_vocab) }
int_to_vocab = { i: w for w, i in vocab_to_int.items() }
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.': '||period||',
',': '||comma||',
'"': '||quotationmark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words) // batch_size
words = words[:n_batches*batch_size]
features, targets = [], []
for idx in range(0, (len(words) - sequence_length)):
features.append(words[idx : idx+sequence_length])
targets.append(words[idx + sequence_length])
feature_tensors = torch.from_numpy(np.asarray(features))
target_tensors = torch.from_numpy(np.asarray(targets))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.embeds = nn.Embedding(vocab_size, embedding_dim)
# define model layers
self.lstm = nn.LSTM(embedding_dim, hidden_dim,
n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embeds(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs (convert the output of lstm layer (lstm_out) into a single vector)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
rnn.zero_grad()
# perform backpropagation and optimization
# create new variable for the hidden state, otherwise we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 32 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.569646213054657
Epoch: 1/10 Loss: 4.903888876914978
Epoch: 1/10 Loss: 4.672388590335846
Epoch: 1/10 Loss: 4.54434988451004
Epoch: 1/10 Loss: 4.5281686902046205
Epoch: 1/10 Loss: 4.5664872193336485
Epoch: 1/10 Loss: 4.465383111476898
Epoch: 1/10 Loss: 4.343403972148895
Epoch: 1/10 Loss: 4.3210724906921385
Epoch: 1/10 Loss: 4.252272840976715
Epoch: 1/10 Loss: 4.371095730304718
Epoch: 1/10 Loss: 4.3912276711463925
Epoch: 1/10 Loss: 4.391727335453034
Epoch: 2/10 Loss: 4.184920583628426
Epoch: 2/10 Loss: 4.0109624605178835
Epoch: 2/10 Loss: 3.9138201785087587
Epoch: 2/10 Loss: 3.880740828990936
Epoch: 2/10 Loss: 3.9042025117874144
Epoch: 2/10 Loss: 4.008398723125458
Epoch: 2/10 Loss: 3.9446401982307435
Epoch: 2/10 Loss: 3.8605413846969605
Epoch: 2/10 Loss: 3.842089657783508
Epoch: 2/10 Loss: 3.784481876850128
Epoch: 2/10 Loss: 3.9156136593818665
Epoch: 2/10 Loss: 3.926361674785614
Epoch: 2/10 Loss: 3.9235469045639038
Epoch: 3/10 Loss: 3.84044204098134
Epoch: 3/10 Loss: 3.7677185273170473
Epoch: 3/10 Loss: 3.695485861778259
Epoch: 3/10 Loss: 3.6573827176094054
Epoch: 3/10 Loss: 3.680353585243225
Epoch: 3/10 Loss: 3.781810849189758
Epoch: 3/10 Loss: 3.739059875011444
Epoch: 3/10 Loss: 3.6542064056396484
Epoch: 3/10 Loss: 3.6418440165519717
Epoch: 3/10 Loss: 3.5999845514297486
Epoch: 3/10 Loss: 3.7204623069763185
Epoch: 3/10 Loss: 3.709243686676025
Epoch: 3/10 Loss: 3.7199526662826536
Epoch: 4/10 Loss: 3.6580167336404816
Epoch: 4/10 Loss: 3.5927111444473265
Epoch: 4/10 Loss: 3.5276292357444765
Epoch: 4/10 Loss: 3.4953991475105286
Epoch: 4/10 Loss: 3.5144025554656984
Epoch: 4/10 Loss: 3.6310911202430725
Epoch: 4/10 Loss: 3.6029968276023863
Epoch: 4/10 Loss: 3.5077578992843628
Epoch: 4/10 Loss: 3.4986249899864195
Epoch: 4/10 Loss: 3.4666494555473326
Epoch: 4/10 Loss: 3.600983817577362
Epoch: 4/10 Loss: 3.573426197052002
Epoch: 4/10 Loss: 3.605087466239929
Epoch: 5/10 Loss: 3.5346858205874105
Epoch: 5/10 Loss: 3.4853166971206666
Epoch: 5/10 Loss: 3.4185337285995483
Epoch: 5/10 Loss: 3.397120719909668
Epoch: 5/10 Loss: 3.404377478122711
Epoch: 5/10 Loss: 3.5253946480751037
Epoch: 5/10 Loss: 3.4875299983024597
Epoch: 5/10 Loss: 3.405537743091583
Epoch: 5/10 Loss: 3.3958568601608277
Epoch: 5/10 Loss: 3.364429218292236
Epoch: 5/10 Loss: 3.5005551533699037
Epoch: 5/10 Loss: 3.4673416152000427
Epoch: 5/10 Loss: 3.4864660873413085
Epoch: 6/10 Loss: 3.4507964955381127
Epoch: 6/10 Loss: 3.405338900089264
Epoch: 6/10 Loss: 3.3421392107009886
Epoch: 6/10 Loss: 3.3130164761543273
Epoch: 6/10 Loss: 3.325946174621582
Epoch: 6/10 Loss: 3.438218214035034
Epoch: 6/10 Loss: 3.406750172138214
Epoch: 6/10 Loss: 3.32884024477005
Epoch: 6/10 Loss: 3.317576075553894
Epoch: 6/10 Loss: 3.293105420589447
Epoch: 6/10 Loss: 3.4282815790176393
Epoch: 6/10 Loss: 3.388278433799744
Epoch: 6/10 Loss: 3.4071144456863403
Epoch: 7/10 Loss: 3.3834101939496914
Epoch: 7/10 Loss: 3.3443531384468077
Epoch: 7/10 Loss: 3.277641586780548
Epoch: 7/10 Loss: 3.2539846467971802
Epoch: 7/10 Loss: 3.263275703907013
Epoch: 7/10 Loss: 3.3695823040008546
Epoch: 7/10 Loss: 3.3457471075057983
Epoch: 7/10 Loss: 3.285442876338959
Epoch: 7/10 Loss: 3.260405200004578
Epoch: 7/10 Loss: 3.2375488934516907
Epoch: 7/10 Loss: 3.370436939239502
Epoch: 7/10 Loss: 3.3361376304626464
Epoch: 7/10 Loss: 3.341608226776123
Epoch: 8/10 Loss: 3.32905676739275
Epoch: 8/10 Loss: 3.2985602645874024
Epoch: 8/10 Loss: 3.2299773263931275
Epoch: 8/10 Loss: 3.20190758228302
Epoch: 8/10 Loss: 3.2117767534255983
Epoch: 8/10 Loss: 3.3105455675125124
Epoch: 8/10 Loss: 3.2984149770736693
Epoch: 8/10 Loss: 3.2341997981071473
Epoch: 8/10 Loss: 3.204292078971863
Epoch: 8/10 Loss: 3.1903437275886537
Epoch: 8/10 Loss: 3.3155577182769775
Epoch: 8/10 Loss: 3.2859347643852233
Epoch: 8/10 Loss: 3.2892895097732544
Epoch: 9/10 Loss: 3.2839157275917117
Epoch: 9/10 Loss: 3.2492265133857727
Epoch: 9/10 Loss: 3.1888844327926638
Epoch: 9/10 Loss: 3.1661140813827515
Epoch: 9/10 Loss: 3.1664534668922424
Epoch: 9/10 Loss: 3.270862488269806
Epoch: 9/10 Loss: 3.2628838171958923
Epoch: 9/10 Loss: 3.1905170602798463
Epoch: 9/10 Loss: 3.1589288034439087
Epoch: 9/10 Loss: 3.1527077651023863
Epoch: 9/10 Loss: 3.2745649905204774
Epoch: 9/10 Loss: 3.256315386772156
Epoch: 9/10 Loss: 3.2581371936798096
Epoch: 10/10 Loss: 3.2420642220776927
Epoch: 10/10 Loss: 3.2144362425804136
Epoch: 10/10 Loss: 3.1593434796333315
Epoch: 10/10 Loss: 3.1309784088134767
Epoch: 10/10 Loss: 3.128321165084839
Epoch: 10/10 Loss: 3.2270213775634766
Epoch: 10/10 Loss: 3.2273673377037047
Epoch: 10/10 Loss: 3.1523736033439635
Epoch: 10/10 Loss: 3.118960084915161
Epoch: 10/10 Loss: 3.119816872596741
Epoch: 10/10 Loss: 3.23650101852417
Epoch: 10/10 Loss: 3.2255920367240907
Epoch: 10/10 Loss: 3.213144190788269
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**- sequence_length: It should roughly match the number of words the model looks at before generating the next word. I tried values of 8, 16, and 32; the longer the sequence length, the more complex the model. I chose 32. - batch_size: I tried 64, 128, and 256, and found 128 to be a good fit for my local machine's GPU memory.- embedding_dim: Too small a value can reduce the dimensionality too much and lose important information, while too large a value makes the model more complicated and harder to train. I tried 128, 256, and 512 and settled on 256.- hidden_dim: A large value can lead to some overfitting, but for TV script generation that is not a big issue as long as the model keeps generating interesting scripts; a little surprise in the generated dialogue is even welcome. I tried 128, 256, and 512 and chose 256, since it does not produce overly strange scripts.- n_layers: The usual choice for the number of GRU/LSTM layers is 1, 2 or 3. I use 2 to keep the model simple.- num_epochs: It should be large enough to reach reasonably trained parameters while stopping early enough to avoid overfitting. I used 10 epochs and got a good training result. - learning_rate: If training converges too slowly, I increase the learning rate; if the loss fluctuates, it is time to reduce it. I chose 0.001, which works well in general. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
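As a rough illustration of the tuning process described in the answer above, here is a minimal, hypothetical sketch of how a few `hidden_dim` candidates could be compared by training each one briefly and reading the printed losses. It assumes the objects defined elsewhere in this notebook (`int_text`, `vocab_to_int`, `batch_data`, `RNN`, `train_rnn`, `train_on_gpu`); it is not part of the project code.

```python
import torch
import torch.nn as nn

# train_rnn reads the global train_loader, so build it once for the sweep
train_loader = batch_data(int_text, sequence_length=32, batch_size=128)

vocab_size = len(vocab_to_int)
for hd in [128, 256, 512]:  # candidate hidden sizes
    print('--- hidden_dim = {} ---'.format(hd))
    model = RNN(vocab_size, vocab_size, embedding_dim=256, hidden_dim=hd, n_layers=2, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    opt = torch.optim.Adam(model.parameters(), lr=0.001)
    crit = nn.CrossEntropyLoss()
    # a couple of epochs is enough to compare how quickly the loss drops
    train_rnn(model, 128, opt, crit, n_epochs=2, show_every_n_batches=500)
```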
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
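As a small, standalone illustration of the top-k sampling step described above (toy scores only, independent of the trained model):

```python
import numpy as np
import torch
import torch.nn.functional as F

# pretend word scores for a vocabulary of 8 words (batch of 1)
output = torch.tensor([[2.0, 0.5, 1.0, 3.0, 0.1, 1.5, 0.2, 2.5]])

p = F.softmax(output, dim=1).data   # turn scores into probabilities
top_k = 5
p, top_i = p.topk(top_k)            # keep only the k most likely words
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

# sample the next word id, weighted by the renormalized top-k probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```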
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'george' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
george: como upon a plane.
elaine: oh, i don't want to get the picture of nbc!
jerry: so what?
elaine: what?
kramer: well, i guess i was watchin' mortal como.
jerry: what are you doin'?
kramer: no no no! no problem!
stu: you know, i was a jackass. it's a lovely boy.
hoyt: so, how do you know about the plane?
george: i don't know.
elaine:(to jerry) what is that?
jerry: so what? what do you say, 'no, and the female bubble is a victim, starving?
george: yes!
hoyt: i can't believe that was the defendants- talker call the moops!
hoyt: so?
hoyt: so you were watchin' your mind.
hoyt: so, uh, you want to come back?
hoyt: and then, uh, you want to go to a library cop?
hoyt:(pointing at jerry) you know, the whole victim is going to be held accountable.
chiles: i think this is a good time of nbc.
jerry: you know what the defendants are in here?
george: i was in the bathroom and i pretended that was the most explanation.
jerry: you know what?
jerry: what do you mean?
elaine: well i am going to call jill.
hoyt: so how about abandoning the plane..
hoyt: what?
elaine: what are you doing here?
jerry: well, i think we should be in a mood.
hoyt:(pointing in disgust and vigorously object to the bathroom) oh, i think so...
george: you know what, what is it?
elaine:(pointing to the phone) well, i think i can do that.
chiles: oh...
hoyt: i was screamin'.
chiles: i can't see this.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Imports
###Code
import numpy as np
import pandas as pd
import collections
###Output
_____no_output_____
###Markdown
Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
word_counts = collections.Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
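The cell below delegates the work to `helper.py`; as a rough, hypothetical sketch of what such a preprocessing step typically looks like (replace punctuation with the tokens, lowercase and split the text, map words to ids, and pickle the result — not the actual helper implementation):

```python
import pickle

def preprocess_sketch(text, token_lookup, create_lookup_tables, out_path='preprocess_sketch.p'):
    # surround each punctuation symbol with spaces so it becomes its own "word"
    for symbol, token in token_lookup().items():
        text = text.replace(symbol, ' {} '.format(token))
    words = text.lower().split()
    vocab_to_int, int_to_vocab = create_lookup_tables(words)
    int_text = [vocab_to_int[word] for word in words]
    pickle.dump((int_text, vocab_to_int, int_to_vocab, token_lookup()), open(out_path, 'wb'))
```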
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
batch_size_total = batch_size * sequence_length
# total number of batches we can make
n_batches = len(words)//batch_size_total
words = words[:n_batches * batch_size_total]
# build (feature, target) pairs: each feature is a window of `sequence_length` word ids
n = len(words) - sequence_length
x, y = [], []
for idx in range(0, n):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# the target is the word id that immediately follows the window
y_batch = words[idx_end]
y.append(y_batch)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# note: shuffle is left off here so batches follow the original script order;
# pass shuffle=True if you prefer the training batches shuffled
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
import numpy as np
import torch
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
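Before the implementation below, here is a small standalone shape check of the reshaping described in the hints (toy tensors only, not the project model):

```python
import torch

batch_size, seq_length, hidden_dim, output_size = 4, 5, 8, 10

# pretend this is the LSTM output for a batch of sequences
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)

# stack the outputs so every time step goes through the same fully-connected layer
stacked = lstm_output.contiguous().view(-1, hidden_dim)   # shape (20, 8)

fc = torch.nn.Linear(hidden_dim, output_size)
scores = fc(stacked)                                       # shape (20, 10)

# reshape back and keep only the scores for the last time step of each sequence
scores = scores.view(batch_size, -1, output_size)          # shape (4, 5, 10)
last_scores = scores[:, -1]                                # shape (4, 10)
print(last_scores.shape)                                   # torch.Size([4, 10])
```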
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
input_embedding = self.embedding(nn_input)
lstm_out, hidden = self.lstm(input_embedding, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer (dropout is applied inside the stacked LSTM only)
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move data and model to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
rnn.cuda()
#set zero grad
rnn.zero_grad()
# detach hidden state from history
h = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# get predicted outputs
output, h = rnn(inp, h)
# calculate loss
loss = criterion(output, target)
# backward prop
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress will be printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.896044356584549
Epoch: 1/10 Loss: 4.472833939909935
Epoch: 1/10 Loss: 4.330938710570336
Epoch: 2/10 Loss: 4.09480341168101
Epoch: 2/10 Loss: 3.922113653898239
Epoch: 2/10 Loss: 3.892978543639183
Epoch: 3/10 Loss: 3.794577428802279
Epoch: 3/10 Loss: 3.7095399137735368
Epoch: 3/10 Loss: 3.6991673481464384
Epoch: 4/10 Loss: 3.6337133000210438
Epoch: 4/10 Loss: 3.5685569670200348
Epoch: 4/10 Loss: 3.565621659874916
Epoch: 5/10 Loss: 3.5113238398792697
Epoch: 5/10 Loss: 3.4785272579193114
Epoch: 5/10 Loss: 3.4706383802890777
Epoch: 6/10 Loss: 3.42234632452423
Epoch: 6/10 Loss: 3.3891663069725038
Epoch: 6/10 Loss: 3.3913224357366563
Epoch: 7/10 Loss: 3.3571709921006296
Epoch: 7/10 Loss: 3.3278514384031297
Epoch: 7/10 Loss: 3.3307785955667497
Epoch: 8/10 Loss: 3.3022590637529325
Epoch: 8/10 Loss: 3.2749763374328613
Epoch: 8/10 Loss: 3.273710937023163
Epoch: 9/10 Loss: 3.255635271446251
Epoch: 9/10 Loss: 3.2312054147720337
Epoch: 9/10 Loss: 3.2289528781175614
Epoch: 10/10 Loss: 3.2181485639548293
Epoch: 10/10 Loss: 3.19440953707695
Epoch: 10/10 Loss: 3.192575043082237
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - I kept `sequence_length` and `n_layers` fixed at `10` and `2` respectively.- I decided to experiment with the batch_size and hidden_dim parameters. From my previous experience with character-level RNN models, `batch_size = 32` and `hidden_dim = 64` generally work well, so I started with those values along with the other parameters. The loss started at `5.9` and came down to `4.2` in `6` epochs, but it was not dropping quickly enough and showed some oscillation.- Because this dataset is larger than my previous character-level RNN projects, and because I wanted to see whether a faster convergence to the target loss of `3.5` was possible, I then switched to `batch_size = 128` and `hidden_dim = 256`. This change was promising: the loss started at `4.896`, came down to `3.389` by epoch 6, and reached `3.192` at the end of epoch `10`. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
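As a side note to the loss comparison above, here is a minimal, hypothetical sketch of how the per-batch losses could be captured and plotted instead of only reading the printed averages. It reuses the notebook's `forward_back_prop` and `train_loader`, plus a model/optimizer/criterion like the ones defined above; it is not part of the project code.

```python
import numpy as np
import matplotlib.pyplot as plt

def train_and_record(rnn, batch_size, optimizer, criterion, n_epochs):
    """Variant of the training loop that keeps every batch loss for plotting."""
    losses = []
    rnn.train()
    for epoch_i in range(1, n_epochs + 1):
        hidden = rnn.init_hidden(batch_size)
        n_batches = len(train_loader.dataset) // batch_size
        for batch_i, (inputs, labels) in enumerate(train_loader, 1):
            if batch_i > n_batches:
                break
            loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
            losses.append(loss)
    return losses

# losses = train_and_record(rnn, 128, optimizer, criterion, n_epochs=1)
# plt.plot(np.convolve(losses, np.ones(100) / 100, mode='valid'))  # smoothed curve
# plt.xlabel('batch'); plt.ylabel('loss'); plt.show()
```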
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move back to cpu before using numpy
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress will be printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
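For orientation, here is one plausible starting configuration (the specific numbers are illustrative assumptions, not values prescribed by the project); the cell below is where you set your own.
```
# Illustrative starting values only; tune for your data and hardware.
sequence_length = 10              # words per training example
batch_size = 128                  # sequences per batch
num_epochs = 10                   # passes over the training data
learning_rate = 0.001             # Adam learning rate
vocab_size = len(vocab_to_int)    # one input id per token (check that special tokens such as padding are counted)
output_size = vocab_size          # a score for every token in the vocabulary
embedding_dim = 300               # noticeably smaller than vocab_size
hidden_dim = 256                  # LSTM hidden units
n_layers = 2                      # stacked LSTM/GRU layers
show_every_n_batches = 500        # print progress this often
```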
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
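To see the sampling step in isolation, here is a toy snippet (made-up scores, not project code) showing how top-k sampling picks the next word with a little randomness; `generate` below applies the same idea to the network's real word scores.
```
import numpy as np
import torch
import torch.nn.functional as F

# Made-up word scores for a tiny vocabulary of 6 tokens (illustration only).
scores = torch.tensor([[0.1, 2.0, 0.5, 1.5, 0.2, 3.0]])
p = F.softmax(scores, dim=1).data            # turn raw scores into probabilities
p, top_i = p.topk(3)                         # keep the 3 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
next_word_id = np.random.choice(top_i, p=p / p.sum())   # sample one of them
print(top_i, next_word_id)
```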
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
unique_words = len({word: None for word in text.split()})
print('Roughly the number of unique words: {}'.format(unique_words))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
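Once `create_lookup_tables` below is implemented, a quick sanity check (illustrative only, using a toy word list) is that the two dictionaries invert each other:
```
# Toy check, not part of the project's tests.
toy_words = "to be or not to be".split()
v2i, i2v = create_lookup_tables(toy_words)
assert all(i2v[v2i[w]] == w for w in toy_words)
```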
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# DONE: Implement Function
## Create dictionary "vocab_to_int" to go from the words to an id
# create unique list of words
unique_words = list(set(text))
print(len(unique_words))
# map unique words to id in dictionary
vocab_to_int = {word: idx for idx, word in enumerate(unique_words)}
## Create dictionary "int_to_vocab" to go from the id to word
# map unique id to word in dictionary
int_to_vocab = {idx: word for idx, word in enumerate(unique_words)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
71
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add a space delimiter around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
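As a toy illustration of how such a dictionary gets applied (the real substitution happens during preprocessing, a few cells below), each symbol is replaced by its token padded with spaces so that it splits off as its own word:
```
# Assumes token_lookup() from the next cell; the printed output is only indicative.
line = 'hello! how are you, jerry?'
for symbol, token in token_lookup().items():
    line = line.replace(symbol, ' {} '.format(token))
print(line.split())
# e.g. ['hello', '||Exclamation_Mark||', 'how', 'are', 'you', '||Comma||', 'jerry', '||Question_Mark||']
```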
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# DONE: Implement Function
punc_dict = {"." : "||Period||",
"," : "||Comma||",
'"' : "||Quotation_Mark||",
";" : "||Semicolon||",
"!" : "||Exclamation_Mark||",
"?" : "||Question_Mark||",
"(" : "||Left_Parentheses||",
")" : "||Righ_Parentheses||",
"-" : "||Dash||",
"\n" : "||Return||",
}
return punc_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
21388
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
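Spelled out in plain Python with the toy `words` list from the example above (illustration only), the windowing produces the following (feature, target) pairs; the `batch_data` implementation below wraps the same idea in tensors and a DataLoader:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]
print(pairs)   # [([1, 2, 3, 4], 5), ([2, 3, 4, 5], 6), ([3, 4, 5, 6], 7)]
```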
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# calculate the number of words and the number of (sequence, target) samples
n_words = len(words)
n_batches = n_words - sequence_length
# # keep only enough words to make full batches
# words = words[:n_batches * batch_size * ]
## instantiate feature_tensors and target_tensors as blank (all-zero) numpy arrays
# feature_tensor size will be [n_batches, sequence_length]
feature_tensors = np.zeros((n_batches, sequence_length), dtype=int)
# target_tensor size will be [n_batches, 1]
target_tensors = np.zeros((n_batches, 1), dtype=int)
print("n_batches ", n_batches)
print("target shaep ", np.shape(target_tensors))
## populate the feature_tensor and target_tensor
for i in range(0, n_batches):
#print(i)
feature_tensors[i] = words[i:i + sequence_length]
#print(feature_tensor[i])
target_tensors[i] = words[i + sequence_length]
#print(target_tensor[i])
print("final_words ", words[-1])
print("feature_tensor ", feature_tensors[n_batches-1])
print("target_tensor ", target_tensors[n_batches-1])
# convert the numpy arrays into pytorch tensors
feature_tensors = torch.from_numpy(feature_tensors)
target_tensors = torch.from_numpy(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_data(int_text, 10, 5)
###Output
n_batches 892100
target shape  (892100, 1)
final_words 17407
feature_tensor [ 2165 13213 16876 2534 13168 15051 12836 14361 7156 17407]
target_tensor [17407]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
n_batches 45
target shape  (45, 1)
final_words 49
feature_tensor [44 45 46 47 48]
target_tensor [49]
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10, 1])
tensor([[ 5],
[ 6],
[ 7],
[ 8],
[ 9],
[ 10],
[ 11],
[ 12],
[ 13],
[ 14]])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
## define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(dropout)
# fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
# get batch size
batch_size = nn_input.size(0)
# embedding and LSTM layers
x = nn_input.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
#out = self.fc(out)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# create new variables for the hidden state to avoid backpropagating through the entire training history
hidden = tuple([each.data for each in hidden])
# set gradient to 0
rnn.zero_grad()
# squeeze target into the single dimension expected by the loss function
target = target.squeeze(1)
# perform backpropagation and optimization
output, h = rnn(inp, hidden)
# print("inp: ", inp.size(), "target: ", target.size())
loss = criterion(output.squeeze(), target.long())
loss.backward(retain_graph=True)
# use clip to prevent exploding gradient
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
#loss = float(loss)
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
#tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed after a set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
from workspace_utils import active_session
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
# prevent the loop from timing out with active_session()
with active_session():
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence; adjusted from 5 to see the impact on training (a length of 5 left the loss around 4)
# Batch Size
batch_size = 128 # kept running out of memory at higher values
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = unique_words
# Output size
output_size = vocab_size + 1
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.714723686218262
Epoch: 1/10 Loss: 4.947399044513703
Epoch: 1/10 Loss: 4.6830759882926944
Epoch: 1/10 Loss: 4.566186357021332
Epoch: 1/10 Loss: 4.574236698150635
Epoch: 1/10 Loss: 4.598560484409332
Epoch: 1/10 Loss: 4.500714970588684
Epoch: 1/10 Loss: 4.386897342681885
Epoch: 1/10 Loss: 4.3637693424224855
Epoch: 1/10 Loss: 4.294235398292542
Epoch: 1/10 Loss: 4.416331884860992
Epoch: 1/10 Loss: 4.447272889614105
Epoch: 1/10 Loss: 4.44886888551712
Epoch: 2/10 Loss: 4.22156453649326
Epoch: 2/10 Loss: 4.019749310493469
Epoch: 2/10 Loss: 3.9229512248039247
Epoch: 2/10 Loss: 3.8771452169418334
Epoch: 2/10 Loss: 3.9142372097969056
Epoch: 2/10 Loss: 4.007809993743897
Epoch: 2/10 Loss: 3.9367793407440184
Epoch: 2/10 Loss: 3.867321635246277
Epoch: 2/10 Loss: 3.8507271032333374
Epoch: 2/10 Loss: 3.8097642946243284
Epoch: 2/10 Loss: 3.9510108041763305
Epoch: 2/10 Loss: 3.969310088634491
Epoch: 2/10 Loss: 3.9876924748420715
Epoch: 3/10 Loss: 3.867785342583592
Epoch: 3/10 Loss: 3.7681717638969423
Epoch: 3/10 Loss: 3.702448328495026
Epoch: 3/10 Loss: 3.658593469619751
Epoch: 3/10 Loss: 3.673355776309967
Epoch: 3/10 Loss: 3.773371073246002
Epoch: 3/10 Loss: 3.7327209677696227
Epoch: 3/10 Loss: 3.6552824969291686
Epoch: 3/10 Loss: 3.651632721424103
Epoch: 3/10 Loss: 3.636891181945801
Epoch: 3/10 Loss: 3.752034731388092
Epoch: 3/10 Loss: 3.7687638039588927
Epoch: 3/10 Loss: 3.7861446509361265
Epoch: 4/10 Loss: 3.692302106949813
Epoch: 4/10 Loss: 3.606812599658966
Epoch: 4/10 Loss: 3.5411606884002684
Epoch: 4/10 Loss: 3.519571429729462
Epoch: 4/10 Loss: 3.53412579202652
Epoch: 4/10 Loss: 3.6380978260040284
Epoch: 4/10 Loss: 3.589286687850952
Epoch: 4/10 Loss: 3.510832920074463
Epoch: 4/10 Loss: 3.51266285610199
Epoch: 4/10 Loss: 3.5048670778274538
Epoch: 4/10 Loss: 3.621937972545624
Epoch: 4/10 Loss: 3.66049054479599
Epoch: 4/10 Loss: 3.6557118601799012
Epoch: 5/10 Loss: 3.578338870077064
Epoch: 5/10 Loss: 3.501653299808502
Epoch: 5/10 Loss: 3.4428087730407713
Epoch: 5/10 Loss: 3.4347814507484435
Epoch: 5/10 Loss: 3.442070734500885
Epoch: 5/10 Loss: 3.5404933252334594
Epoch: 5/10 Loss: 3.4917204933166506
Epoch: 5/10 Loss: 3.426301456928253
Epoch: 5/10 Loss: 3.4189863348007203
Epoch: 5/10 Loss: 3.409524739742279
Epoch: 5/10 Loss: 3.525466704368591
Epoch: 5/10 Loss: 3.544346896648407
Epoch: 5/10 Loss: 3.5465411224365235
Epoch: 6/10 Loss: 3.488031313761346
Epoch: 6/10 Loss: 3.4313817591667175
Epoch: 6/10 Loss: 3.3624429535865783
Epoch: 6/10 Loss: 3.3602363324165343
Epoch: 6/10 Loss: 3.356619673252106
Epoch: 6/10 Loss: 3.464830176830292
Epoch: 6/10 Loss: 3.4014710030555726
Epoch: 6/10 Loss: 3.3436798100471496
Epoch: 6/10 Loss: 3.336275463104248
Epoch: 6/10 Loss: 3.3384783935546873
Epoch: 6/10 Loss: 3.4377746348381044
Epoch: 6/10 Loss: 3.4658451790809632
Epoch: 6/10 Loss: 3.482286964893341
Epoch: 7/10 Loss: 3.4236547371182278
Epoch: 7/10 Loss: 3.3627661385536194
Epoch: 7/10 Loss: 3.3065256156921388
Epoch: 7/10 Loss: 3.3031564240455626
Epoch: 7/10 Loss: 3.2919883036613466
Epoch: 7/10 Loss: 3.408005521774292
Epoch: 7/10 Loss: 3.3372538013458253
Epoch: 7/10 Loss: 3.282076304912567
Epoch: 7/10 Loss: 3.280915585041046
Epoch: 7/10 Loss: 3.2877967281341554
Epoch: 7/10 Loss: 3.372384461402893
Epoch: 7/10 Loss: 3.393191682815552
Epoch: 7/10 Loss: 3.400779527664185
Epoch: 8/10 Loss: 3.3664774050776556
Epoch: 8/10 Loss: 3.310150158405304
Epoch: 8/10 Loss: 3.258916851043701
Epoch: 8/10 Loss: 3.2518119635581972
Epoch: 8/10 Loss: 3.2393870911598204
Epoch: 8/10 Loss: 3.35853059053421
Epoch: 8/10 Loss: 3.289390298843384
Epoch: 8/10 Loss: 3.2308204183578493
Epoch: 8/10 Loss: 3.2316478242874145
Epoch: 8/10 Loss: 3.2444359769821167
Epoch: 8/10 Loss: 3.317259634971619
Epoch: 8/10 Loss: 3.3405857963562013
Epoch: 8/10 Loss: 3.337912019729614
Epoch: 9/10 Loss: 3.3225249766811373
Epoch: 9/10 Loss: 3.2688573575019837
Epoch: 9/10 Loss: 3.2203273429870607
Epoch: 9/10 Loss: 3.219068323135376
Epoch: 9/10 Loss: 3.1970218787193296
Epoch: 9/10 Loss: 3.3108576860427856
Epoch: 9/10 Loss: 3.2424683537483214
Epoch: 9/10 Loss: 3.188258267879486
Epoch: 9/10 Loss: 3.188338352203369
Epoch: 9/10 Loss: 3.201212610244751
Epoch: 9/10 Loss: 3.2708193316459657
Epoch: 9/10 Loss: 3.305390962600708
Epoch: 9/10 Loss: 3.2952378735542296
Epoch: 10/10 Loss: 3.2804403910326885
Epoch: 10/10 Loss: 3.240860106945038
Epoch: 10/10 Loss: 3.1989309272766113
Epoch: 10/10 Loss: 3.188296244621277
Epoch: 10/10 Loss: 3.160790126800537
Epoch: 10/10 Loss: 3.2704716300964356
Epoch: 10/10 Loss: 3.204473289012909
Epoch: 10/10 Loss: 3.15122399520874
Epoch: 10/10 Loss: 3.150492645740509
Epoch: 10/10 Loss: 3.1670052223205567
Epoch: 10/10 Loss: 3.2323387851715086
Epoch: 10/10 Loss: 3.270649323940277
Epoch: 10/10 Loss: 3.2512613244056703
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**I mostly used the default hyperparameter values I gathered from different lessons in the Deep Learning nanodegree, and otherwise referenced best practices to select starting points and ranges.I experimented with the following:1. batch_size - I started with 256, which exhausted the memory, so I adjusted down to 1282. num_epochs - I started with 5 but it seemed the loss was hardly getting close to the desired minimum so I adjusted upwards to 103. n_layers - I tested 3 layers but found no difference from 2 layers so I reverted to 2 layers4. sequence_length - I started off with a sequence length of 5, but the loss was not getting close to the desired 3.5 value. Once I tested the sequence length at 10, my model had no trouble converging. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:50: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sort the words from most to least frequent
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add a space delimiter around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punct_to_token = {
'.': '||PERIOD||',
',': '||COMMA||',
'"': '||QUOTATION_MARK||',
';': '||SEMICOLON||',
'!': '||EXCLAMATION_MARK||',
'?': '||QUESTION_MARK||',
'(': '||LEFT_PAREN||',
')': '||RIGHT_PAREN||',
'-': '||DASH||',
'\n': '||RETURN||'
}
return punct_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors, target_tensors = [], []
for i in range(len(words)):
target_idx = i + sequence_length
if target_idx < len(words):
features = words[i:i + sequence_length]
feature_tensors.append(features)
target = words[target_idx]
target_tensors.append(target)
# convert to tensor
feature_tensors = torch.from_numpy(np.asarray(feature_tensors))
target_tensors = torch.from_numpy(np.asarray(target_tensors))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 10, 11, 12, 13, 14],
[ 39, 40, 41, 42, 43],
[ 5, 6, 7, 8, 9],
[ 32, 33, 34, 35, 36],
[ 23, 24, 25, 26, 27],
[ 14, 15, 16, 17, 18],
[ 6, 7, 8, 9, 10],
[ 36, 37, 38, 39, 40],
[ 22, 23, 24, 25, 26],
[ 3, 4, 5, 6, 7]])
torch.Size([10])
tensor([ 15, 44, 10, 37, 28, 19, 11, 41, 27, 8])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
#linear layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
out = self.fc(lstm_out)
# reshape to be (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10  # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.529391175270081
Epoch: 1/20 Loss: 4.864750448226928
Epoch: 1/20 Loss: 4.673917549610138
Epoch: 1/20 Loss: 4.53624011516571
Epoch: 1/20 Loss: 4.43394695186615
Epoch: 1/20 Loss: 4.391361342906952
Epoch: 1/20 Loss: 4.349459547996521
Epoch: 1/20 Loss: 4.293628051280975
Epoch: 1/20 Loss: 4.264595439434052
Epoch: 1/20 Loss: 4.2329872598648075
Epoch: 1/20 Loss: 4.219681046485901
Epoch: 1/20 Loss: 4.176609238624573
Epoch: 1/20 Loss: 4.173091259479523
Epoch: 2/20 Loss: 4.0620111695753165
Epoch: 2/20 Loss: 3.97152907705307
Epoch: 2/20 Loss: 3.9748258848190305
Epoch: 2/20 Loss: 3.949656668663025
Epoch: 2/20 Loss: 3.9572295794487
Epoch: 2/20 Loss: 3.9438273811340334
Epoch: 2/20 Loss: 3.9231659355163573
Epoch: 2/20 Loss: 3.925105185031891
Epoch: 2/20 Loss: 3.909913544654846
Epoch: 2/20 Loss: 3.933670042037964
Epoch: 2/20 Loss: 3.9098702807426453
Epoch: 2/20 Loss: 3.9223010420799254
Epoch: 2/20 Loss: 3.938464592933655
Epoch: 3/20 Loss: 3.804339567082093
Epoch: 3/20 Loss: 3.757045175075531
Epoch: 3/20 Loss: 3.7410627422332765
Epoch: 3/20 Loss: 3.746852583408356
Epoch: 3/20 Loss: 3.7452759342193604
Epoch: 3/20 Loss: 3.7707648339271547
Epoch: 3/20 Loss: 3.7618724212646484
Epoch: 3/20 Loss: 3.760535253047943
Epoch: 3/20 Loss: 3.7732520866394044
Epoch: 3/20 Loss: 3.755405373573303
Epoch: 3/20 Loss: 3.740365430831909
Epoch: 3/20 Loss: 3.751500138282776
Epoch: 3/20 Loss: 3.7737027554512026
Epoch: 4/20 Loss: 3.6998141119477674
Epoch: 4/20 Loss: 3.617857692718506
Epoch: 4/20 Loss: 3.6174797143936157
Epoch: 4/20 Loss: 3.6146593928337096
Epoch: 4/20 Loss: 3.634882091999054
Epoch: 4/20 Loss: 3.6236209311485292
Epoch: 4/20 Loss: 3.656707974910736
Epoch: 4/20 Loss: 3.649324120521545
Epoch: 4/20 Loss: 3.635141547203064
Epoch: 4/20 Loss: 3.6623518476486208
Epoch: 4/20 Loss: 3.647887206554413
Epoch: 4/20 Loss: 3.666522166252136
Epoch: 4/20 Loss: 3.656830111026764
Epoch: 5/20 Loss: 3.5839353879784897
Epoch: 5/20 Loss: 3.534105875968933
Epoch: 5/20 Loss: 3.5169877190589904
Epoch: 5/20 Loss: 3.5174399967193604
Epoch: 5/20 Loss: 3.558476655960083
Epoch: 5/20 Loss: 3.531831500530243
Epoch: 5/20 Loss: 3.5559655900001528
Epoch: 5/20 Loss: 3.536132891178131
Epoch: 5/20 Loss: 3.565013190746307
Epoch: 5/20 Loss: 3.5810106892585756
Epoch: 5/20 Loss: 3.572405120372772
Epoch: 5/20 Loss: 3.5877698526382447
Epoch: 5/20 Loss: 3.597608308315277
Epoch: 6/20 Loss: 3.5061968187306563
Epoch: 6/20 Loss: 3.4279884305000303
Epoch: 6/20 Loss: 3.440210905075073
Epoch: 6/20 Loss: 3.470801765918732
Epoch: 6/20 Loss: 3.4496021003723145
Epoch: 6/20 Loss: 3.482304501533508
Epoch: 6/20 Loss: 3.484525243282318
Epoch: 6/20 Loss: 3.503444883823395
Epoch: 6/20 Loss: 3.498856789112091
Epoch: 6/20 Loss: 3.5078747344017027
Epoch: 6/20 Loss: 3.522995020389557
Epoch: 6/20 Loss: 3.5332308502197267
Epoch: 6/20 Loss: 3.529061026096344
Epoch: 7/20 Loss: 3.459417881488308
Epoch: 7/20 Loss: 3.384156697273254
Epoch: 7/20 Loss: 3.389166923522949
Epoch: 7/20 Loss: 3.398777552127838
Epoch: 7/20 Loss: 3.4103299684524537
Epoch: 7/20 Loss: 3.424512595176697
Epoch: 7/20 Loss: 3.4071421575546266
Epoch: 7/20 Loss: 3.445137722969055
Epoch: 7/20 Loss: 3.445871344566345
Epoch: 7/20 Loss: 3.449563290596008
Epoch: 7/20 Loss: 3.4708132734298704
Epoch: 7/20 Loss: 3.4714629349708557
Epoch: 7/20 Loss: 3.475574741363525
Epoch: 8/20 Loss: 3.4081745054207597
Epoch: 8/20 Loss: 3.3436727862358095
Epoch: 8/20 Loss: 3.354399088382721
Epoch: 8/20 Loss: 3.3449832668304444
Epoch: 8/20 Loss: 3.3495688972473143
Epoch: 8/20 Loss: 3.3638466124534605
Epoch: 8/20 Loss: 3.38108523273468
Epoch: 8/20 Loss: 3.4094437856674196
Epoch: 8/20 Loss: 3.4086391820907593
Epoch: 8/20 Loss: 3.4086695485115053
Epoch: 8/20 Loss: 3.421333518028259
Epoch: 8/20 Loss: 3.4205390634536745
Epoch: 8/20 Loss: 3.4438080477714537
Epoch: 9/20 Loss: 3.35868846890358
Epoch: 9/20 Loss: 3.2935498371124265
Epoch: 9/20 Loss: 3.28726358795166
Epoch: 9/20 Loss: 3.311820751667023
Epoch: 9/20 Loss: 3.3329122214317324
Epoch: 9/20 Loss: 3.341483960151672
Epoch: 9/20 Loss: 3.3512190375328066
Epoch: 9/20 Loss: 3.336891372680664
Epoch: 9/20 Loss: 3.3644563784599306
Epoch: 9/20 Loss: 3.3801354532241823
Epoch: 9/20 Loss: 3.39298441028595
Epoch: 9/20 Loss: 3.3924070143699647
Epoch: 9/20 Loss: 3.402356596946716
Epoch: 10/20 Loss: 3.333333688377719
Epoch: 10/20 Loss: 3.2606058802604676
Epoch: 10/20 Loss: 3.264059374809265
Epoch: 10/20 Loss: 3.2967714405059816
Epoch: 10/20 Loss: 3.2851467266082763
Epoch: 10/20 Loss: 3.306796584606171
Epoch: 10/20 Loss: 3.305581615447998
Epoch: 10/20 Loss: 3.3108429794311522
Epoch: 10/20 Loss: 3.320066169261932
Epoch: 10/20 Loss: 3.3418579959869383
Epoch: 10/20 Loss: 3.3553106684684755
Epoch: 10/20 Loss: 3.389110330581665
Epoch: 10/20 Loss: 3.3771179237365723
Epoch: 11/20 Loss: 3.291515919200161
Epoch: 11/20 Loss: 3.2240699586868287
Epoch: 11/20 Loss: 3.2339837374687197
Epoch: 11/20 Loss: 3.2528910026550295
Epoch: 11/20 Loss: 3.2731507120132446
Epoch: 11/20 Loss: 3.2790966925621032
Epoch: 11/20 Loss: 3.278383232116699
Epoch: 11/20 Loss: 3.284507378578186
Epoch: 11/20 Loss: 3.3126354699134826
Epoch: 11/20 Loss: 3.3165445895195007
Epoch: 11/20 Loss: 3.3021326088905334
Epoch: 11/20 Loss: 3.336663378715515
Epoch: 11/20 Loss: 3.3606710910797117
Epoch: 12/20 Loss: 3.273235888180964
Epoch: 12/20 Loss: 3.2143123846054076
Epoch: 12/20 Loss: 3.216760028362274
Epoch: 12/20 Loss: 3.2183160047531127
Epoch: 12/20 Loss: 3.2526223793029785
Epoch: 12/20 Loss: 3.253923884868622
Epoch: 12/20 Loss: 3.231957037448883
Epoch: 12/20 Loss: 3.267335345745087
Epoch: 12/20 Loss: 3.26310148191452
Epoch: 12/20 Loss: 3.285992920398712
Epoch: 12/20 Loss: 3.301306200027466
Epoch: 12/20 Loss: 3.3112958788871767
Epoch: 12/20 Loss: 3.308891138076782
Epoch: 13/20 Loss: 3.242131777469096
Epoch: 13/20 Loss: 3.1717548875808714
Epoch: 13/20 Loss: 3.1900500340461733
Epoch: 13/20 Loss: 3.2054759612083434
Epoch: 13/20 Loss: 3.2211250371932985
Epoch: 13/20 Loss: 3.2237357287406923
Epoch: 13/20 Loss: 3.226701593399048
Epoch: 13/20 Loss: 3.242177087306976
Epoch: 13/20 Loss: 3.257656816482544
Epoch: 13/20 Loss: 3.2739972996711733
Epoch: 13/20 Loss: 3.288779035568237
Epoch: 13/20 Loss: 3.286942635059357
Epoch: 13/20 Loss: 3.3027435512542724
Epoch: 14/20 Loss: 3.220484633440819
Epoch: 14/20 Loss: 3.1590835218429567
Epoch: 14/20 Loss: 3.1784450936317445
Epoch: 14/20 Loss: 3.1829410891532897
Epoch: 14/20 Loss: 3.2023180832862854
Epoch: 14/20 Loss: 3.1920323343276977
Epoch: 14/20 Loss: 3.195057466983795
Epoch: 14/20 Loss: 3.2153018436431884
Epoch: 14/20 Loss: 3.233634081363678
Epoch: 14/20 Loss: 3.2512196350097655
Epoch: 14/20 Loss: 3.2593807835578916
Epoch: 14/20 Loss: 3.2833624482154846
Epoch: 14/20 Loss: 3.2832912101745606
Epoch: 15/20 Loss: 3.215326426322
Epoch: 15/20 Loss: 3.134665168762207
Epoch: 15/20 Loss: 3.1522001147270204
Epoch: 15/20 Loss: 3.1754859085083007
Epoch: 15/20 Loss: 3.174431248188019
Epoch: 15/20 Loss: 3.1996604952812193
Epoch: 15/20 Loss: 3.204711798667908
Epoch: 15/20 Loss: 3.1993963813781736
Epoch: 15/20 Loss: 3.2118970527648925
Epoch: 15/20 Loss: 3.220851254463196
Epoch: 15/20 Loss: 3.2454243774414064
Epoch: 15/20 Loss: 3.2423621950149535
Epoch: 15/20 Loss: 3.2676719970703125
Epoch: 16/20 Loss: 3.177448484058597
Epoch: 16/20 Loss: 3.1191557040214537
Epoch: 16/20 Loss: 3.134803343772888
Epoch: 16/20 Loss: 3.1511257400512696
Epoch: 16/20 Loss: 3.168752275466919
Epoch: 16/20 Loss: 3.172501452445984
Epoch: 16/20 Loss: 3.18746435546875
Epoch: 16/20 Loss: 3.193613554477692
Epoch: 16/20 Loss: 3.1950018639564512
Epoch: 16/20 Loss: 3.2123971381187437
Epoch: 16/20 Loss: 3.2215262694358824
Epoch: 16/20 Loss: 3.2384445128440857
Epoch: 16/20 Loss: 3.2336698637008667
Epoch: 17/20 Loss: 3.1798037803579042
Epoch: 17/20 Loss: 3.110266995429993
Epoch: 17/20 Loss: 3.130526102542877
Epoch: 17/20 Loss: 3.1248331561088563
Epoch: 17/20 Loss: 3.1492376255989076
Epoch: 17/20 Loss: 3.1620756893157957
Epoch: 17/20 Loss: 3.1571986265182495
Epoch: 17/20 Loss: 3.1948366208076475
Epoch: 17/20 Loss: 3.1939454264640808
Epoch: 17/20 Loss: 3.204389548301697
Epoch: 17/20 Loss: 3.1888044686317443
Epoch: 17/20 Loss: 3.2138706440925597
Epoch: 17/20 Loss: 3.2258294010162354
Epoch: 18/20 Loss: 3.146411789706125
Epoch: 18/20 Loss: 3.0961286783218385
Epoch: 18/20 Loss: 3.1242588081359863
Epoch: 18/20 Loss: 3.133706639289856
Epoch: 18/20 Loss: 3.1309349694252013
Epoch: 18/20 Loss: 3.144322584629059
Epoch: 18/20 Loss: 3.1536198887825013
Epoch: 18/20 Loss: 3.176272698402405
Epoch: 18/20 Loss: 3.1755652117729185
Epoch: 18/20 Loss: 3.1745694031715392
Epoch: 18/20 Loss: 3.1830272850990293
Epoch: 18/20 Loss: 3.2063993945121765
Epoch: 18/20 Loss: 3.2037946944236757
Epoch: 19/20 Loss: 3.150665195610747
Epoch: 19/20 Loss: 3.087133441448212
Epoch: 19/20 Loss: 3.11333136510849
Epoch: 19/20 Loss: 3.1150451335906983
Epoch: 19/20 Loss: 3.122633202075958
Epoch: 19/20 Loss: 3.1271822962760925
Epoch: 19/20 Loss: 3.1251408529281615
Epoch: 19/20 Loss: 3.145860999584198
Epoch: 19/20 Loss: 3.168108784675598
Epoch: 19/20 Loss: 3.166985785484314
Epoch: 19/20 Loss: 3.170442952632904
Epoch: 19/20 Loss: 3.181114953994751
Epoch: 19/20 Loss: 3.2101287002563477
Epoch: 20/20 Loss: 3.1437751636662594
Epoch: 20/20 Loss: 3.06917157125473
Epoch: 20/20 Loss: 3.0799417219161986
Epoch: 20/20 Loss: 3.0942797131538393
Epoch: 20/20 Loss: 3.092326626777649
Epoch: 20/20 Loss: 3.1178182945251467
Epoch: 20/20 Loss: 3.1420974383354188
Epoch: 20/20 Loss: 3.143783143043518
Epoch: 20/20 Loss: 3.146897301197052
Epoch: 20/20 Loss: 3.166947968482971
Epoch: 20/20 Loss: 3.173344421863556
Epoch: 20/20 Loss: 3.185516931056976
Epoch: 20/20 Loss: 3.1903200421333313
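The cross-entropy values printed above are average natural-log losses per predicted word, so they can also be read as a perplexity. A small check for the final reported loss of about 3.19 (this snippet is an added illustration, not one of the original notebook cells):
```
import math
print(math.exp(3.19))  # ~24.3: roughly as uncertain as a uniform choice over 24 candidate words
```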
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** As a starting point for the hyperparameters I used the values from the previous lessons and exercises, and in the end I did not have to change them to reach the required loss of 3.5. To optimize the training a bit, I still compared different sequence lengths, learning rates, numbers of RNN layers, hidden dimensions and embedding dimensions while training for 5 epochs. I tried the following values:
- sequence lengths: 5, 10, 15
- learning rate: 0.01, 0.001, 0.0001
- number of RNN layers: 1, 2, 3
- hidden dimensions: 64, 128, 256
- embedding dimensions: 200, 300
I chose the values that gave the fastest and most consistent decrease in training loss (a minimal sketch of this kind of comparison is shown below). For the final training step I used 20 epochs. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
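Returning to the comparison described in the answer above, here is a minimal sketch of how such a short hyperparameter sweep could look. It assumes the notebook's `RNN`, `batch_data`, `forward_back_prop`, `int_text`, `vocab_to_int` and `train_on_gpu` are already defined; the helper name `quick_loss`, the batch counts and the candidate values are illustrative only, not part of the submitted code:
```
import numpy as np
import torch
import torch.nn as nn

def quick_loss(seq_len, hidden_dim, n_layers, lr, batch_size=128, n_batches=300):
    """Average recent training loss after a few hundred batches for one setting."""
    loader = batch_data(int_text, seq_len, batch_size)
    model = RNN(len(vocab_to_int), len(vocab_to_int), 200, hidden_dim, n_layers, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    hidden = model.init_hidden(batch_size)
    model.train()
    losses = []
    for batch_i, (inputs, labels) in enumerate(loader, 1):
        if batch_i > n_batches:
            break  # keep each trial short
        loss, hidden = forward_back_prop(model, optimizer, criterion, inputs, labels, hidden)
        losses.append(loss)
    return np.mean(losses[-100:])  # judge the setting by its most recent batches

# e.g. compare a few sequence lengths with the other settings held fixed
for seq_len in (5, 10, 15):
    print(seq_len, quick_loss(seq_len, hidden_dim=256, n_layers=2, lr=0.001))
```
The same loop can be repeated for the learning rates, layer counts and hidden sizes listed above; because each trial only sees a few hundred batches, the whole comparison stays much cheaper than a full training run.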
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
        # move the sequence back to the CPU first; np.roll cannot operate on a CUDA tensor
        if train_on_gpu:
            current_seq = current_seq.cpu()
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
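The UserWarning in the output above appears because the LSTM weights were re-loaded from the checkpoint and are no longer stored in one contiguous block of memory. As the message itself suggests, a minimal way to silence it (assuming `trained_rnn` is the model loaded by `helper.load_model` above) is to compact the weights once before calling `generate`:
```
# re-pack the loaded LSTM weights into a single contiguous chunk of memory
trained_rnn.lstm.flatten_parameters()
```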
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# adapted from util.py from word2vec project
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.':'||Period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Paren||',
')':'||Right_Paren||',
'-':'||Dash||',
'\n':'||Return||'}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
words = words[:batch_size * n_batches]
x, y = [], []
for idx in range(0, len(words) - sequence_length):
x.append(words[idx:idx + sequence_length])
y.append(words[idx + sequence_length])
feature_tensors = torch.from_numpy(np.asarray(x))
target_tensors = torch.from_numpy(np.asarray(y))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data, shuffle = False, batch_size = batch_size)
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(53)
t_loader = batch_data(test_text, sequence_length=4, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 4])
tensor([[ 0, 1, 2, 3],
[ 1, 2, 3, 4],
[ 2, 3, 4, 5],
[ 3, 4, 5, 6],
[ 4, 5, 6, 7],
[ 5, 6, 7, 8],
[ 6, 7, 8, 9],
[ 7, 8, 9, 10],
[ 8, 9, 10, 11],
[ 9, 10, 11, 12]], dtype=torch.int32)
torch.Size([10])
tensor([ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first = True)
        # note: this layer overwrites the float stored in self.dropout above and is never applied in forward()
        self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        # fully-connected layer (the nn.Dropout layer defined in __init__ is not applied here)
output = self.fc(lstm_out)
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1] # get last batch of labels
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output.squeeze(1), target.long())
loss.backward()
clip = 5
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 15
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.8879834823608395
Epoch: 1/10 Loss: 5.255883387088776
Epoch: 1/10 Loss: 4.952197147369385
Epoch: 1/10 Loss: 4.820415332317352
Epoch: 1/10 Loss: 4.811099704265595
Epoch: 1/10 Loss: 4.852408919334412
Epoch: 1/10 Loss: 4.744156408309936
Epoch: 1/10 Loss: 4.610701610565186
Epoch: 1/10 Loss: 4.5576639690399166
Epoch: 1/10 Loss: 4.496510800838471
Epoch: 1/10 Loss: 4.590415596485138
Epoch: 1/10 Loss: 4.604939558506012
Epoch: 1/10 Loss: 4.596785079956055
Epoch: 2/10 Loss: 4.399568411683248
Epoch: 2/10 Loss: 4.234029729366302
Epoch: 2/10 Loss: 4.1518063282966615
Epoch: 2/10 Loss: 4.118065470218658
Epoch: 2/10 Loss: 4.153740507125854
Epoch: 2/10 Loss: 4.23086633682251
Epoch: 2/10 Loss: 4.164884633541107
Epoch: 2/10 Loss: 4.052936632633209
Epoch: 2/10 Loss: 4.052750252723694
Epoch: 2/10 Loss: 4.0097911176681515
Epoch: 2/10 Loss: 4.1369876337051394
Epoch: 2/10 Loss: 4.127726135730743
Epoch: 2/10 Loss: 4.116226076602936
Epoch: 3/10 Loss: 4.038073611653541
Epoch: 3/10 Loss: 3.9572711324691774
Epoch: 3/10 Loss: 3.8775673389434813
Epoch: 3/10 Loss: 3.854536430835724
Epoch: 3/10 Loss: 3.8949240260124207
Epoch: 3/10 Loss: 3.9791862292289735
Epoch: 3/10 Loss: 3.921848979949951
Epoch: 3/10 Loss: 3.8106048412323
Epoch: 3/10 Loss: 3.810979067325592
Epoch: 3/10 Loss: 3.7926068544387816
Epoch: 3/10 Loss: 3.9088358902931213
Epoch: 3/10 Loss: 3.90287650680542
Epoch: 3/10 Loss: 3.8800569314956666
Epoch: 4/10 Loss: 3.830680659733528
Epoch: 4/10 Loss: 3.7694512376785276
Epoch: 4/10 Loss: 3.7137360949516296
Epoch: 4/10 Loss: 3.695846221446991
Epoch: 4/10 Loss: 3.7254633088111877
Epoch: 4/10 Loss: 3.8081488699913026
Epoch: 4/10 Loss: 3.75006081533432
Epoch: 4/10 Loss: 3.650891854763031
Epoch: 4/10 Loss: 3.6529644889831543
Epoch: 4/10 Loss: 3.6346872572898863
Epoch: 4/10 Loss: 3.7504418897628784
Epoch: 4/10 Loss: 3.747957190036774
Epoch: 4/10 Loss: 3.7438595314025878
Epoch: 5/10 Loss: 3.684265803453351
Epoch: 5/10 Loss: 3.6375495796203614
Epoch: 5/10 Loss: 3.5883078808784483
Epoch: 5/10 Loss: 3.580000238418579
Epoch: 5/10 Loss: 3.594644871711731
Epoch: 5/10 Loss: 3.692937425136566
Epoch: 5/10 Loss: 3.6423147230148314
Epoch: 5/10 Loss: 3.52877615404129
Epoch: 5/10 Loss: 3.530840669155121
Epoch: 5/10 Loss: 3.5226033205986025
Epoch: 5/10 Loss: 3.6330539150238037
Epoch: 5/10 Loss: 3.6348735995292665
Epoch: 5/10 Loss: 3.636835765361786
Epoch: 6/10 Loss: 3.5827553727902655
Epoch: 6/10 Loss: 3.5415079855918883
Epoch: 6/10 Loss: 3.495518507003784
Epoch: 6/10 Loss: 3.491495626449585
Epoch: 6/10 Loss: 3.5056679344177244
Epoch: 6/10 Loss: 3.597500946998596
Epoch: 6/10 Loss: 3.5800885028839113
Epoch: 6/10 Loss: 3.4455021510124206
Epoch: 6/10 Loss: 3.435186081409454
Epoch: 6/10 Loss: 3.434113308906555
Epoch: 6/10 Loss: 3.551355528354645
Epoch: 6/10 Loss: 3.556346004486084
Epoch: 6/10 Loss: 3.5592494101524355
Epoch: 7/10 Loss: 3.5090956062324774
Epoch: 7/10 Loss: 3.463120987415314
Epoch: 7/10 Loss: 3.426212378025055
Epoch: 7/10 Loss: 3.425258728981018
Epoch: 7/10 Loss: 3.4313915486335755
Epoch: 7/10 Loss: 3.521229241847992
Epoch: 7/10 Loss: 3.502367600440979
Epoch: 7/10 Loss: 3.375809354305267
Epoch: 7/10 Loss: 3.3604294362068177
Epoch: 7/10 Loss: 3.3639101281166077
Epoch: 7/10 Loss: 3.481114777088165
Epoch: 7/10 Loss: 3.482030040740967
Epoch: 7/10 Loss: 3.4896444087028504
Epoch: 8/10 Loss: 3.444731027626794
Epoch: 8/10 Loss: 3.4149495549201965
Epoch: 8/10 Loss: 3.3759664483070373
Epoch: 8/10 Loss: 3.368459558963776
Epoch: 8/10 Loss: 3.377001323223114
Epoch: 8/10 Loss: 3.466079110145569
Epoch: 8/10 Loss: 3.445937519073486
Epoch: 8/10 Loss: 3.320456923484802
Epoch: 8/10 Loss: 3.3011093196868897
Epoch: 8/10 Loss: 3.309420441150665
Epoch: 8/10 Loss: 3.421898567199707
Epoch: 8/10 Loss: 3.4243329553604127
Epoch: 8/10 Loss: 3.435413668632507
Epoch: 9/10 Loss: 3.3876476842017214
Epoch: 9/10 Loss: 3.3634887952804564
Epoch: 9/10 Loss: 3.330356463909149
Epoch: 9/10 Loss: 3.3167815341949463
Epoch: 9/10 Loss: 3.320630997657776
Epoch: 9/10 Loss: 3.414757354736328
Epoch: 9/10 Loss: 3.388247101306915
Epoch: 9/10 Loss: 3.26922251701355
Epoch: 9/10 Loss: 3.2560101628303526
Epoch: 9/10 Loss: 3.263214789390564
Epoch: 9/10 Loss: 3.3752480425834657
Epoch: 9/10 Loss: 3.3754741582870484
Epoch: 9/10 Loss: 3.3871750588417053
Epoch: 10/10 Loss: 3.3427198642541556
Epoch: 10/10 Loss: 3.3194530515670775
Epoch: 10/10 Loss: 3.2875109882354736
Epoch: 10/10 Loss: 3.2749180846214294
Epoch: 10/10 Loss: 3.2752520990371705
Epoch: 10/10 Loss: 3.366152672767639
Epoch: 10/10 Loss: 3.3423954834938048
Epoch: 10/10 Loss: 3.230453185081482
Epoch: 10/10 Loss: 3.2154562129974367
Epoch: 10/10 Loss: 3.2278022203445436
Epoch: 10/10 Loss: 3.3294336709976196
Epoch: 10/10 Loss: 3.3312849197387697
Epoch: 10/10 Loss: 3.340261073112488
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**
- According to the paper https://arxiv.org/pdf/1506.02078.pdf, "Our consistent finding is that depth of at least two is beneficial. However, between two and three layers our results are mixed." In line with that, the best LSTM performance appeared when using 2 layers with a hidden size of 256.
- The embedding dimension was chosen with the rule of thumb ```embedding_dimensions = number_of_categories**0.25``` according to an article. The vocabulary is roughly 50k words, so the corresponding embedding size is about 15 (a quick check of this rule is sketched below).
- The sequence length was chosen based on this article: https://medium.com/@theacropolitan/sentence-length-has-declined-75-in-the-past-500-years-2e40f80f589f. The average sentence length is now about 15 words, so I tested lengths like 8, 10 and 12.
--- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
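A quick check of the fourth-root rule of thumb mentioned above (a small added sketch; `vocab_to_int` is the dictionary built during pre-processing). For the roughly 46k unique words reported in the dataset stats the rule gives about 15; the tokenized vocabulary may be somewhat smaller, which would push the estimate down a little:
```
vocab_size = len(vocab_to_int)
print(vocab_size, round(vocab_size ** 0.25))  # fourth-root heuristic for the embedding size
```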
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # move the sequence back to the CPU so np.roll below can operate on it
        if train_on_gpu:
            current_seq = current_seq.cpu()
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 150 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:.
kramer: oh yeah...
george:(correcting him) you know, i can't do it!
george:(sarcastic) well, i'm sorry about it, and you were in there with the plane, the whole building, the only thing you are.
george:(standing out to the counter, and they had to be carrying it in a long line.
jerry: no. no. i'm sorry, but i got a little bit.
kramer: oh yeah, right.
[setting: jerry's apartment]
jerry: so, what are you doing?
george: well, i was going to get this book in the car, and they were in the bathroom.
george: oh.
jerry: so
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
import torch
print(torch.__version__)
# from google.colab import drive
# drive.mount('/content/drive')
# %cd /content/drive/My Drive/Colab Notebooks/deep-learning-v2-pytorch/project-tv-script-generation/
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_word_counts = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_word_counts)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
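###Markdown
As an optional sanity check (a toy example of my own, not part of the graded cells), the two dictionaries returned by `create_lookup_tables` should be exact inverses of each other, with the most frequent word receiving id 0:
###Code
# toy check of the lookup tables defined above (illustrative word list only)
toy_words = ['jerry', 'george', 'jerry', 'elaine', 'jerry', 'george']
toy_v2i, toy_i2v = create_lookup_tables(toy_words)
print(toy_v2i['jerry']) # most frequent word, expected id 0
print(all(toy_v2i[toy_i2v[i]] == i for i in toy_i2v)) # id -> word -> id round-trips
###Output
_____no_output_____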
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
from string import punctuation
# TODO: Implement Function
dict_punc = {}
for symb in punctuation:
if symb=='.':
dict_punc[symb]="||PERIOD||"
elif symb==',':
dict_punc[symb]="||COMMA||"
elif symb=='"':
dict_punc[symb]="||QUOTATION_MARK||"
elif symb==';':
dict_punc[symb]="||SEMICOLON||"
elif symb=='!':
dict_punc[symb]="||EXCLAMATION_MARK||"
elif symb=='?':
dict_punc[symb]="||QUESTION_MARK||"
elif symb=='(':
dict_punc[symb]="||LEFT_PAREN||"
elif symb==')':
dict_punc[symb]="||RIGHT_PAREN||"
elif symb=='-':
dict_punc[symb]="||HYPHENS||"
# elif symb==':':
# dict_punc[symb]="||COLON||"
dict_punc['\n']="||NEW_LINE||"
return dict_punc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
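###Markdown
To make the effect of the tokenization concrete, here is a small illustration of my own (the actual replacement presumably happens in a similar way inside `helper.preprocess_and_save_data`): every punctuation key is replaced by its token padded with spaces, so that e.g. "newman!" and "newman" end up sharing a single word id.
###Code
# illustrative only; mirrors the assumed replacement logic of the preprocessing helper
sample_line = 'jerry: hello, newman!\n'
tokenized = sample_line
for key, token in token_lookup().items():
    tokenized = tokenized.replace(key, ' {} '.format(token))
print(tokenized.lower().split())
###Output
_____no_output_____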
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
#!pip install --upgrade torch torchvision
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# convert list data into tensor of size (sequence_length x ...)
n_rows = len(words)//(sequence_length+1)
words = np.array(words[:n_rows*(sequence_length+1)])
# print(words.shape)
words_tensor = torch.from_numpy(words).view(n_rows,-1)
# print(words_tensor.size())
# separate last column as targets tensor, remaining is features
feature_tensors = words_tensor[:,:sequence_length]
target_tensors = words_tensor[:,-1]
print("target_tensors size:",target_tensors.size())
assert(feature_tensors.size()[0]==target_tensors.size()[0])
if train_on_gpu:
# print(feature_tensors)
feature_tensors = feature_tensors.cuda()
target_tensors = target_tensors.cuda()
# return a dataloader
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data, batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# import os
# os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
# test dataloader
test_text = range(50)
# print(list(range(50)))
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # built-in next() works across PyTorch versions
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
target_tensors size: torch.Size([8])
torch.Size([8, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 6, 7, 8, 9, 10],
[12, 13, 14, 15, 16],
[18, 19, 20, 21, 22],
[24, 25, 26, 27, 28],
[30, 31, 32, 33, 34],
[36, 37, 38, 39, 40],
[42, 43, 44, 45, 46]], device='cuda:0')
torch.Size([8])
tensor([ 5, 11, 17, 23, 29, 35, 41, 47], device='cuda:0')
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
import torch.nn.functional as F
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size,embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,bias=False,batch_first=True,dropout=dropout,bidirectional=False)
self.dropout = nn.Dropout(p=0.2)
self.fc = nn.Linear(hidden_dim, vocab_size)
# self.sigmoid = nn.Sigmoid()
initrange = 0.1
if (train_on_gpu):
self.embed.weight.data.uniform_(-initrange, initrange).cuda()
# self.fc.bias.data.zero_().cuda()
# self.fc.weight.data.uniform_(-initrange, initrange).cuda()
else:
self.embed.weight.data.uniform_(-initrange, initrange)
# self.fc.bias.data.zero_()
# self.fc.weight.data.uniform_(-initrange, initrange)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size()[0]
embeds = self.embed(nn_input)
lstm_output,hidden = self.lstm(embeds,hidden)
lstm_output = self.dropout(lstm_output)
fc_input = lstm_output.contiguous().view(-1, self.hidden_dim)
# out = self.sigmoid(self.fc(fc_input))
fc_output = self.fc(fc_input)
output = fc_output.view(batch_size, -1, self.output_size)
out = output[:, -1]
return out,hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
iterator = self.parameters()
weight = next(iterator)
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
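###Markdown
As an optional shape check (toy dimensions chosen by me), a single forward pass through the class defined above should return word scores of shape `(batch_size, output_size)` together with a new hidden state:
###Code
# toy forward pass through the RNN defined above (illustrative sizes only)
demo_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=16, hidden_dim=32, n_layers=2, dropout=0.5)
demo_input = torch.randint(0, 20, (4, 6)) # batch of 4 sequences, 6 words each
if train_on_gpu:
    demo_rnn, demo_input = demo_rnn.cuda(), demo_input.cuda()
demo_hidden = demo_rnn.init_hidden(4)
demo_out, demo_hidden = demo_rnn(demo_input, demo_hidden)
print(demo_out.shape) # expected: torch.Size([4, 20])
###Output
_____no_output_____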
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
rnn.zero_grad()
# move data to GPU, if available
if (train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([x.data for x in hidden])
out, hidden = rnn(inp, hidden)
loss = criterion(out.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
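###Markdown
Before launching full training, a one-batch smoke test on toy tensors (my own sketch, with made-up sizes) can confirm that `forward_back_prop` returns a scalar loss and a usable hidden state; the function itself takes care of moving the model and data to the GPU when one is available:
###Code
# toy smoke test for forward_back_prop (illustrative 8-word vocabulary)
toy_rnn = RNN(8, 8, embedding_dim=4, hidden_dim=6, n_layers=2, dropout=0.5)
toy_optimizer = torch.optim.Adam(toy_rnn.parameters(), lr=0.001)
toy_criterion = nn.CrossEntropyLoss()
toy_inp = torch.randint(0, 8, (3, 5)) # batch of 3 sequences, 5 words each
toy_target = torch.randint(0, 8, (3,)) # one target word per sequence
toy_hidden = toy_rnn.init_hidden(3)
toy_loss, toy_hidden = forward_back_prop(toy_rnn, toy_optimizer, toy_criterion, toy_inp, toy_target, toy_hidden)
print(toy_loss) # a single float
###Output
_____no_output_____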
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 100 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 50
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
import time
t = time.time()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
print("in elapsed time ={} for {} epochs, seq_length ={} hidden_dim = {}".format(time.time() - t,num_epochs,sequence_length,hidden_dim))
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 6.911905717849732
Epoch: 2/20 Loss: 5.73502333055843
Epoch: 3/20 Loss: 5.47475217147307
Epoch: 4/20 Loss: 5.188980091701854
Epoch: 5/20 Loss: 4.900497902523387
Epoch: 6/20 Loss: 4.728184369477359
Epoch: 7/20 Loss: 4.557388040152463
Epoch: 8/20 Loss: 4.474985298785296
Epoch: 9/20 Loss: 4.357317640022798
Epoch: 10/20 Loss: 4.211959706111387
Epoch: 11/20 Loss: 4.0703448517756025
Epoch: 12/20 Loss: 3.9142714630473745
Epoch: 13/20 Loss: 3.7770327654751865
Epoch: 14/20 Loss: 3.6297601109201256
Epoch: 15/20 Loss: 3.577383217486468
Epoch: 16/20 Loss: 3.440413599664515
Epoch: 17/20 Loss: 3.268353882161054
Epoch: 18/20 Loss: 3.0990848676724867
Epoch: 19/20 Loss: 2.9485677020116285
Epoch: 20/20 Loss: 2.7713954773816196
in elapsed time =503.3225054740906 for 20 epochs, seq_length =100 hidden_dim = 512
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Based on parameter tuning in previous RNN exercises, I kept sequence_length at 5, 10, 25, 50 or 100, batch_size = 100, learning_rate = 0.001, embedding_dim = 400, hidden_dim at 128, 256 or 512, and n_layers = 2. Modifying one variable at a time within these ranges, I reached a training loss well inside the desired limit (around 2.77). Of the hidden_dim values, 512 gave slower but more accurate learning. With sequence_length between 10 and 50 the generated script was vague; increasing the sequence length means more than 10 epochs are needed to reach a loss below 3.5, but the model then sustains longer coherent stretches within a sentence. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
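###Markdown
Before the full `generate` function below, here is a tiny standalone sketch of the top-k sampling step it relies on (the scores are invented for illustration): keep only the k highest word scores, renormalize them, and draw the next word id at random from that reduced set.
###Code
# minimal top-k sampling illustration with dummy word scores
import numpy as np
import torch
import torch.nn.functional as F
dummy_scores = torch.randn(1, 8) # pretend RNN output for one sequence over an 8-word vocabulary
dummy_p = F.softmax(dummy_scores, dim=1).data
dummy_p, dummy_i = dummy_p.topk(3) # keep the 3 most likely words
dummy_p, dummy_i = dummy_p.numpy().squeeze(), dummy_i.numpy().squeeze()
print(np.random.choice(dummy_i, p=dummy_p / dummy_p.sum())) # sampled word id
###Output
_____no_output_____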
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:.
. yeah, i don't had the arms idea. you don't have the chocolate?
jerry:(not the executives) i'm insult it a bowl, number?
frank: hey. i'm designs?
jerry: yeah.(designs at his usa from the apartment) you got the chocolate, i don't have this sleep out to the street. i don't go you a charles will house at my hand cheek shirt from my thing to the apartment) i got the prime. you is the keys.(talking is a crest in the armoire.
jerry: i don't think i have you have the chocolate to my ticket- the hood up of a counter of the counter.
kramer: oh, i gotta see.
jerry: i know you have you a good today?
elaine: i don't see you you you this have. i know what- you is like this. i don't go. i don't see we have to take there my few environment to the street and jerry?
jerry:(o. you don't have to by this and i don't go this the street, jerry, i was by there. i'm front it the hearty marisa from there. i got it by here.
george: yeah. i guess, i'm not this like this the leotard, eyed there. but insult it?
kramer:(o. you don't see this a sound. i is the keys. you have a keys. you know you don't have to sleep?
george: yeah. you don't think we can do you a question, i don't see.
kramer: yeah i god, you know i have to take this the armoire! i got the keys. i have you like this you know, you have the keys. you know, i'm son's to his my problem for a few unit. and you don't have the chocolate?
jerry:(to smuckers in his thing!
jerry:(pulls in exporter with it who've there.
elaine: i know i don't go
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data.
###Code
#####################################################################################################################
###################################################### Imports ######################################################
#####################################################################################################################
import problem_unittests as tests
import helper
import numpy as np
import re
from collections import Counter
######################################################################################################################
################################################ Parameter definition ################################################
######################################################################################################################
###Output
_____no_output_____
###Markdown
Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
# import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token = dict()
token['.']= '<PERIOD>'
token[',']= '<COMMA>'
token['"']= '<QUOTATION_MARK>'
token[';']= '<SEMICOLON>'
token['!']= '<EXCLAMATION_MARK>'
token['?']= '<QUESTION_MARK>'
token['(']= '<LEFT_PAREN>'
token[')']= '<RIGHT_PAREN>'
# token['--']= '<HYPHENS>'
token['\n']= '<NEW_LINE>'
# token[':']= '<COLON>'
token['-']= '<DASH>'
return token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
batch_y = words[idx_end]
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# create dataloader
dataloader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # built-in next() works across PyTorch versions
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]], dtype=torch.int32)
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
# out = self.dropout(lstm_out)
out = self.fc(lstm_out)
# sigmoid function
# sig_out = self.sig(out)
# reshape to be batch_size first
# sig_out = sig_out.view(batch_size, -1, self.output_size)
# sig_out = sig_out[:, -1] # get last batch of labels
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inputs, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
target = target.type(torch.LongTensor)
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
# Creating new variables for the hidden state, otherwise we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inputs.cuda(), target.cuda()
# get predicted outputs
output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 5000
print(vocab_size)
###Output
21388
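###Markdown
As an optional check of my own before the long training run, it can be useful to count how many trainable parameters the chosen dimensions imply (dominated by the embedding and the final linear layer over the 21388-word vocabulary):
###Code
# rough parameter count for the hyperparameters defined above (throwaway model, not the one trained below)
tmp_rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
print(sum(p.numel() for p in tmp_rnn.parameters() if p.requires_grad))
del tmp_rnn
###Output
_____no_output_____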
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.712341558122635
Epoch: 1/10 Loss: 4.335313930273056
Epoch: 2/10 Loss: 4.149240888813567
Epoch: 2/10 Loss: 3.9730034842729567
Epoch: 3/10 Loss: 3.9168669078861686
Epoch: 3/10 Loss: 3.8041488509655
Epoch: 4/10 Loss: 3.7839235365190715
Epoch: 4/10 Loss: 3.6922322475910185
Epoch: 5/10 Loss: 3.6980445897400473
Epoch: 5/10 Loss: 3.619258991408348
Epoch: 6/10 Loss: 3.635652422624884
Epoch: 6/10 Loss: 3.5545290014743807
Epoch: 7/10 Loss: 3.5807655950898583
Epoch: 7/10 Loss: 3.5075816065073013
Epoch: 8/10 Loss: 3.537103382992675
Epoch: 8/10 Loss: 3.4669744132995604
Epoch: 9/10 Loss: 3.5024679281962854
Epoch: 9/10 Loss: 3.439684652209282
Epoch: 10/10 Loss: 3.470367230566643
Epoch: 10/10 Loss: 3.405293434667587
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I used the parameters of the "Sentiment_RNN" notebook as a starting point. I cut the embedding dimension to 200, increased the learning rate to 0.003 and let the network train for 4 epochs, which only brought me close to the requested loss of 3.5. Increasing the epochs to 10 gave a loss of ~3.6, and decreasing the learning rate to 0.001 finally brought the loss down to ~3.4. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: cheese.
elaine: what are you doing?
morty: no.
elaine: oh, yeah!
george: i got to see you.
elaine: oh, i don't think so.
elaine: you know what? i got the one that had to get a little more more more more than you?
george: you know, i was wondering how you got to know what you think.
jerry: what?
jerry: i got to tell ya. i think it's a lot better than the only way.
jerry: oh, that's it.
jerry: oh, come on.
jerry:(confused) what is this?
kramer: yeah!
elaine: oh, i can't believe this is my way. you know i was going to be able to have a little good time, huh?
jerry: i got to tell you what you think.
jerry: i don't know how much i got to see you again.
jerry: you know the other day, you know, the only way i can get a little...
kramer: well, that's it.
kramer: oh, come on.
kramer:(to the man) hey, you got a little thing with that?
george: yeah, yeah, i guess i could do that...
elaine: i don't understand...
elaine: i don't know, i don't want any money.
elaine: you know, it's the way that i have a little more.
elaine:(confused) oh, my god!
kramer: yeah, well... i can't. i can't believe this is it, i got my mail and the new york.
kramer: well you can't get any money.
george: you think you're better than a good meal.
elaine: you know what?
elaine: oh no.
kramer:(to jerry, puzzled) yeah, well, i'm gonna need it.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
#return (None, None)
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
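Before the implementation below, here is a small self-contained sketch (with hypothetical token values) of how such a dictionary gets applied to raw text; in this project the actual substitution is handled during pre-processing by the provided helper:
```
# Illustrative sketch (hypothetical token values): applying a punctuation-token
# dictionary to raw text before splitting on spaces, so that "bye" and "bye!"
# end up mapping to the same word id.
sample_tokens = {'!': '||exclamation_mark||', ',': '||comma||', '.': '||period||'}
sample = 'hello! how are you, jerry.'
for symbol, token in sample_tokens.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['hello', '||exclamation_mark||', 'how', 'are', 'you', '||comma||', 'jerry', '||period||']
```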
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
#return None
tokens = {'.' : '||period||',
',' : '||comma||',
'"' : '||quotation_mark||',
';' : '||semicolon||',
'!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||left_parentheses||',
')' : '||right_parentheses||',
'-' : '||dash||',
'\n': '||return||'
}
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
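As a quick sanity check of the indexing described above (a standalone sketch, not the required `batch_data` implementation), the example `words = [1, 2, 3, 4, 5, 6, 7]` with `sequence_length = 4` produces exactly the pairs listed:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
# slide a window of length sequence_length over the ids;
# the word immediately after the window is the target
features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```
A `batch_data` implementation then only needs to wrap these pairs in a `TensorDataset` and hand them to a `DataLoader`.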
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
#return None
rows = len(words) - sequence_length
feature_tensors = np.zeros((rows, sequence_length), dtype=np.int64)
target_tensors = np.zeros(rows, dtype=np.int64)
for i in range(0, rows):
feature_tensors[i] = words[i: i+sequence_length]
target_tensors[i] = words[i+sequence_length]
data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors))
data_loader = DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
words = [85, 100, 97, 99, 105, 116, 121, 46, 99, 111, 109]
data_loader = batch_data(words, 3, 3)
for i, batch in enumerate(iter(data_loader)):
print(f"batch[{i}] -> {batch}")
###Output
batch[0] -> [tensor([[ 85, 100, 97],
[ 100, 97, 99],
[ 97, 99, 105]]), tensor([ 99, 105, 116])]
batch[1] -> [tensor([[ 99, 105, 116],
[ 105, 116, 121],
[ 116, 121, 46]]), tensor([ 121, 46, 99])]
batch[2] -> [tensor([[ 121, 46, 99],
[ 46, 99, 111]]), tensor([ 111, 109])]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
#return None, None
nn_input = nn_input.to(torch.long)
embeds = self.embedding(nn_input)
# get the output and the new hidden state from the lstm
output, hidden = self.lstm(embeds, hidden)
output = output.contiguous().view(-1, self.hidden_dim)
# add fully-connected layer
output = self.fc(output)
batch_size = nn_input.size(0)
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
#return None
weights = next(self.parameters()).data
if (train_on_gpu):
hidden = (weights.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weights.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
#return None, None
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# create new variables for the hidden state
hidden = tuple([_.data for _ in hidden])
# perform backpropagation and optimization
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
# clip_grad_norm prevents exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 7)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress will be printed after every set number of batches, controlled by the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
#sequence_length = # of words in a sequence
# Batch Size
#batch_size =
sequence_length = 8
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
#num_epochs =
# Learning Rate
#learning_rate =
num_epochs = 4
learning_rate = 0.001
# Model parameters
# Vocab size
#vocab_size =
# Output size
#output_size =
# Embedding Dimension
#embedding_dim =
# Hidden Dimension
#hidden_dim =
# Number of RNN Layers
#n_layers =
vocab_size = len(int_to_vocab)
output_size = vocab_size
embedding_dim = 128
hidden_dim = 512
n_layers = 1
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
"num_layers={}".format(dropout, num_layers))
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** My strategy was to choose hyperparameters that keep the training time as short as possible while still reaching the target loss. During initial tests I found that training needs to last at least 4 epochs to reach a loss below 3.5, with the learning rate set to 0.001. I then tweaked the other parameters to minimize the computational load: I experimented with batch sizes of 64 and 128, and only 128 allowed the network to keep learning over 4 epochs; I started with a sequence length of 16 and was able to lower it to 8; a similar experiment let me lower the embedding dimension to 128. The hidden size could not go below 512. All of the above experiments used 2 layers; later I found that setting the number of layers to 1 further reduced the required training to 3 epochs. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:45: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
import re
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_table = dict()
token_table['.'] = "<PERIOD>"
token_table[','] = "<COMMA>"
token_table['"'] = "<QUOTATION_MARK>"
token_table[';'] = "<SEMICOLON>"
token_table['!'] = "<EXCLAMATION_MARK>"
token_table['?'] = "<QUESTION_MARK>"
token_table['('] = "<LEFT_PAREN>"
token_table[')'] = "<RIGHT_PAREN>"
token_table['-'] = "<DASH>"
token_table['\n'] = "<NEW_LINE>"
return token_table
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors, target_tensors = [], []
for idx in range(len(words) - sequence_length):
feature_tensors.append(words[idx: idx + sequence_length])
target_tensors.append(words[idx + sequence_length])
feature_tensors, target_tensors = torch.Tensor(feature_tensors), torch.Tensor(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
# return a dataloader
return DataLoader(data, batch_size=batch_size, shuffle=True)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[40., 41., 42., 43., 44.],
[19., 20., 21., 22., 23.],
[14., 15., 16., 17., 18.],
[16., 17., 18., 19., 20.],
[ 0., 1., 2., 3., 4.],
[23., 24., 25., 26., 27.],
[18., 19., 20., 21., 22.],
[12., 13., 14., 15., 16.],
[27., 28., 29., 30., 31.],
[34., 35., 36., 37., 38.]])
torch.Size([10])
tensor([45., 24., 19., 21., 5., 28., 23., 17., 32., 39.])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# self.dropout = nn.Dropout(p=0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
emb = self.embedding(nn_input.long())
out, hidden = self.lstm(emb, hidden)
# out = self.dropout(out)
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
out = out.view(nn_input.size(0), -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# Avoid backprop through entire hidden history
hidden = tuple([each.data for each in hidden])
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
rnn.zero_grad()
output, h = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output, target.long())
loss.backward()
# Prevent exploding gradient with clipping
# nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress will be printed after every set number of batches, controlled by the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
!nvidia-smi
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from fastprogress import master_bar, progress_bar
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
mb = master_bar(range(1, n_epochs + 1))
for epoch_i in mb:
# for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
n_batches = len(train_loader.dataset)//batch_size
# build the batch iterator once per epoch so each step draws a new batch,
# instead of re-creating (and re-shuffling) the iterator on every step
train_iter = iter(train_loader)
for idx in progress_bar(range(n_batches), parent=mb):
(inputs, labels) = next(train_iter)
batch_i = idx + 1
#for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
####
mb.child.comment = f'Running loss {np.average(batch_losses)}'
mb.first_bar.comment = f'Final loss {np.average(batch_losses)}'
mb.write(f'Finished loop {epoch_i} - Loss {np.average(batch_losses)}.')
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 1e-3
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200 # 400
# Hidden Dimension
hidden_dim = 512 # 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = len(train_loader.dataset) // (2 * batch_size)
# 5 epochs : 3.4856
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I started rather simply, with a smaller batch size than in my final iteration. Initially I had an additional dropout layer before the final fully-connected layer, which turned out to be unnecessary and slowed training down. Regarding sequence length, I noticed quite a difference in the coherence of the generated scripts when using longer sequences: shorter ones converged faster in terms of grammar, but longer ones produced text that made more sense overall. I reduced the embedding dimension and increased the hidden dimension so the network could carry context for longer. My biggest mistake at the beginning was not detaching the hidden state correctly between batches, which caused a memory leak I spent time tracking down. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
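As a side note on the hidden-state issue mentioned in the answer above, here is a minimal sketch of the usual fix; the `forward_back_prop` implemented earlier achieves the same effect by rebuilding the hidden tuple from `.data`:
```
# Minimal sketch: detach the LSTM hidden state from the previous batch's graph.
# Without this the computation graph keeps growing across batches and memory leaks.
def detach_hidden(hidden):
    return tuple(h.detach() for h in hidden)

# illustrative use inside a training loop:
# hidden = detach_hidden(hidden)
# loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
```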
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
# Personal edit: ensure inference is performed on the same device as the input data (CPU)
##########
rnn.cpu()
current_seq = current_seq.cpu()
hidden = hidden[0].cpu(), hidden[1].cpu()
###########
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: creaking hats, and rub spaghetti at the beep. '
jerry: yeah!
george:(urgent) oh god.
george:(to jerry) i don't know.
jerry: what do you mean, maybe you should get the job?
george: i don't know.
jerry: i thought it was an accident.
kramer: oh, well, that's nice.(indicates 'gene' kruger's) well, you see, i've got to have a call for a second, but i don't have to be a vegetable. i don't know how the bubble boy did that.
george:(worried) you know, it's not like this... it's a doodle.
elaine: you see, i think she might be, he doesn't know if you can spare me somewhere, i have to say it.
kramer: hey hey hey. hey, hey. hey. hey.
george:(to kramer) i was a 718 biologist!
jerry: what?
george: i don't know, you don't know, it's a joke of a man's visit.
elaine: i think you're not getting any sleep?
elaine: no, no, you said i was dumb. i really have a little reason to see you.
kramer: hey, hey.
jerry: hi, how ya doing?(kramer hits the button to the door.)
jerry:(to george) so, you know, if you think i could call the police, i don't want to get the extension clear.
elaine: i thought you hated sweatpants thunder, you're yella.
jerry: well, you know, it's just not fair.(to kramer) hey. hey, i got the doll.
jerry: i know. it's all right.
george: what is it?
jerry: you know, i really enjoy it. i don't know where she looks like
george: you see? you're going to the hospital, newman. i can't believe it.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
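Before filling in the TODO cell below, a sketch of one common approach (ordering words by frequency with `collections.Counter`; the function name is illustrative, not the graded one):

```python
from collections import Counter

def create_lookup_tables_sketch(text):
    # count word frequencies and give lower ids to more frequent words
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab
```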
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
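A sketch of what such a dictionary could look like (the placeholder token strings are just one possible choice; anything that cannot be confused with a real word works):

```python
def token_lookup_sketch():
    # map each punctuation symbol to an unambiguous placeholder token
    return {
        '.':  '||period||',
        ',':  '||comma||',
        '"':  '||quotation_mark||',
        ';':  '||semicolon||',
        '!':  '||exclamation_mark||',
        '?':  '||question_mark||',
        '(':  '||left_parentheses||',
        ')':  '||right_parentheses||',
        '-':  '||dash||',
        '\n': '||return||',
    }
```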
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
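One possible way to window the word ids into features and targets is sketched here (assuming `words` is an iterable of integer ids; this is an illustration only, the graded implementation goes in the cell below):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    # slide a window of length sequence_length over the ids;
    # the word immediately after each window becomes its target
    words = list(words)
    features, targets = [], []
    for i in range(len(words) - sequence_length):
        features.append(words[i:i + sequence_length])
        targets.append(words[i + sequence_length])
    data = TensorDataset(torch.as_tensor(features, dtype=torch.long),
                         torch.as_tensor(targets, dtype=torch.long))
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```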
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
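For orientation, one way the pieces can fit together is sketched below (an LSTM variant with illustrative layer choices, not the graded solution; on a GPU the hidden-state tensors would additionally be moved with `.cuda()`):

```python
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.hidden_dim, self.n_layers, self.output_size = hidden_dim, n_layers, output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        out, hidden = self.lstm(self.embedding(nn_input), hidden)
        out = self.fc(out.contiguous().view(-1, self.hidden_dim))
        out = out.view(batch_size, -1, self.output_size)
        return out[:, -1], hidden                 # word scores for the last time step only

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        return (weight.new_zeros(self.n_layers, batch_size, self.hidden_dim),
                weight.new_zeros(self.n_layers, batch_size, self.hidden_dim))
```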
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
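A sketch of the training step described above (gradient clipping is an optional extra commonly used for LSTMs; the graded implementation belongs in the cell below):

```python
import torch.nn as nn

def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    # if a GPU is in use, inp and target would be moved to it here with .cuda()
    hidden = tuple(h.detach() for h in hidden)     # stop gradients at this batch
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # optional: keep gradients bounded
    optimizer.step()
    return loss.item(), hidden
```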
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_freq = Counter(text)
sorted_words = sorted(word_freq, key = word_freq.get, reverse=True)
int_to_vocab = {i: each_word for i, each_word in enumerate(sorted_words)}
vocab_to_int = {v: k for k, v in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_to_token_dict = {
".":"<PERIOD>",
",": "<COMMA>",
"\"": "<QUOTATION_MARK>",
";": "<SEMICOLON>",
"!": "<EXCLAMATION_MARK>",
"?": "<QUESTION_MARK>",
"(": "<LEFT_PARENTHESES>",
")": "<RIGHT_PARENTHESES>",
"-": "<DASH>",
"\n": "<RETURN>"}
return punc_to_token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# get number of batches
n_batches = len(words)//batch_size
# get words
words = words[:n_batches*batch_size]
target_length = len(words) - sequence_length
feature_tensor = []
target_tensor = []
for i in range(target_length):
feature_batch = words[i:i+sequence_length]
target_batch = words[i+sequence_length]
feature_tensor.append(feature_batch)
target_tensor.append(target_batch)
feature_tensor = torch.from_numpy(np.asarray(feature_tensor))
target_tensor = torch.from_numpy(np.asarray(target_tensor))
data = TensorDataset(feature_tensor, target_tensor)
data_loader = torch.utils.data.DataLoader(data,
shuffle=True,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[24, 25, 26, 27, 28],
[25, 26, 27, 28, 29],
[ 0, 1, 2, 3, 4],
[12, 13, 14, 15, 16],
[ 9, 10, 11, 12, 13],
[ 7, 8, 9, 10, 11],
[23, 24, 25, 26, 27],
[11, 12, 13, 14, 15],
[ 8, 9, 10, 11, 12],
[28, 29, 30, 31, 32]])
torch.Size([10])
tensor([29, 30, 5, 17, 14, 12, 28, 16, 13, 33])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim,
hidden_dim,
n_layers,
dropout = dropout,
batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# get batch size
batch_size = nn_input.size(0)
# get the embedding layers
embedding_output = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embedding_output, hidden)
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
# output = self.dropout(lstm_output)
# output = self.fc(output)
output = self.fc(lstm_output)
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weights = next(self.parameters()).data
if train_on_gpu:
hidden = (
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda()
)
else:
hidden = (
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weights.new(self.n_layers, batch_size, self.hidden_dim).zero_()
)
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp = inp.cuda()
target = target.cuda()
hidden = tuple([each.data for each in hidden])
# replace gradient instead of accumulation
rnn.zero_grad()
# get the output
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 50 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 5
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
#print(vocab_size, output_size)
# Embedding Dimension
embedding_dim = 150
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 100
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 5 epoch(s)...
Epoch: 1/5 Loss: 6.4068379306793215
Epoch: 1/5 Loss: 5.80017605304718
Epoch: 1/5 Loss: 5.524508976936341
Epoch: 1/5 Loss: 5.270959777832031
Epoch: 1/5 Loss: 5.056707382202148
Epoch: 1/5 Loss: 5.002842583656311
Epoch: 1/5 Loss: 4.893531923294067
Epoch: 1/5 Loss: 4.767762522697449
Epoch: 1/5 Loss: 4.7649554967880245
Epoch: 1/5 Loss: 4.656662092208863
Epoch: 1/5 Loss: 4.647053670883179
Epoch: 1/5 Loss: 4.590586309432983
Epoch: 1/5 Loss: 4.534190645217896
Epoch: 1/5 Loss: 4.50496241569519
Epoch: 1/5 Loss: 4.462353477478027
Epoch: 1/5 Loss: 4.451501240730286
Epoch: 1/5 Loss: 4.473024802207947
Epoch: 1/5 Loss: 4.4302429723739625
Epoch: 1/5 Loss: 4.401646440029144
Epoch: 1/5 Loss: 4.396014726161956
Epoch: 1/5 Loss: 4.389751906394959
Epoch: 1/5 Loss: 4.378966391086578
Epoch: 1/5 Loss: 4.371344213485718
Epoch: 1/5 Loss: 4.322054214477539
Epoch: 1/5 Loss: 4.327344102859497
Epoch: 1/5 Loss: 4.303751721382141
Epoch: 1/5 Loss: 4.286612455844879
Epoch: 1/5 Loss: 4.25340856552124
Epoch: 1/5 Loss: 4.260677220821381
Epoch: 1/5 Loss: 4.231102390289307
Epoch: 1/5 Loss: 4.255189394950866
Epoch: 1/5 Loss: 4.239887957572937
Epoch: 1/5 Loss: 4.268645915985108
Epoch: 1/5 Loss: 4.241557712554932
Epoch: 2/5 Loss: 4.159387060853302
Epoch: 2/5 Loss: 4.100561435222626
Epoch: 2/5 Loss: 4.076433598995209
Epoch: 2/5 Loss: 4.072349045276642
Epoch: 2/5 Loss: 4.069418666362762
Epoch: 2/5 Loss: 4.06257465839386
Epoch: 2/5 Loss: 4.058723542690277
Epoch: 2/5 Loss: 4.058817756175995
Epoch: 2/5 Loss: 4.021687088012695
Epoch: 2/5 Loss: 4.079593675136566
Epoch: 2/5 Loss: 4.06189908504486
Epoch: 2/5 Loss: 4.015020637512207
Epoch: 2/5 Loss: 4.001477456092834
Epoch: 2/5 Loss: 4.017015645503998
Epoch: 2/5 Loss: 4.027090125083923
Epoch: 2/5 Loss: 4.022747764587402
Epoch: 2/5 Loss: 4.056444859504699
Epoch: 2/5 Loss: 3.959230074882507
Epoch: 2/5 Loss: 3.991372244358063
Epoch: 2/5 Loss: 4.023463635444641
Epoch: 2/5 Loss: 4.013900892734528
Epoch: 2/5 Loss: 3.9712702584266664
Epoch: 2/5 Loss: 3.9888929438591005
Epoch: 2/5 Loss: 3.9944417691230774
Epoch: 2/5 Loss: 3.9805916333198548
Epoch: 2/5 Loss: 3.9727710771560667
Epoch: 2/5 Loss: 3.953549497127533
Epoch: 2/5 Loss: 3.971550006866455
Epoch: 2/5 Loss: 3.95867192029953
Epoch: 2/5 Loss: 3.992890188694
Epoch: 2/5 Loss: 3.916501529216766
Epoch: 2/5 Loss: 3.9801615619659425
Epoch: 2/5 Loss: 3.9616604328155516
Epoch: 2/5 Loss: 3.972940526008606
Epoch: 3/5 Loss: 3.8893500468770013
Epoch: 3/5 Loss: 3.8272493290901184
Epoch: 3/5 Loss: 3.809918806552887
Epoch: 3/5 Loss: 3.8243420624732973
Epoch: 3/5 Loss: 3.8020728254318237
Epoch: 3/5 Loss: 3.7898845744132994
Epoch: 3/5 Loss: 3.8551889729499815
Epoch: 3/5 Loss: 3.8339338660240174
Epoch: 3/5 Loss: 3.828848407268524
Epoch: 3/5 Loss: 3.817383871078491
Epoch: 3/5 Loss: 3.8198004817962645
Epoch: 3/5 Loss: 3.823597071170807
Epoch: 3/5 Loss: 3.8172457337379457
Epoch: 3/5 Loss: 3.826956343650818
Epoch: 3/5 Loss: 3.843570771217346
Epoch: 3/5 Loss: 3.8241956758499147
Epoch: 3/5 Loss: 3.788174576759338
Epoch: 3/5 Loss: 3.802480704784393
Epoch: 3/5 Loss: 3.802792975902557
Epoch: 3/5 Loss: 3.8059629797935486
Epoch: 3/5 Loss: 3.822601993083954
Epoch: 3/5 Loss: 3.8122875332832336
Epoch: 3/5 Loss: 3.792563076019287
Epoch: 3/5 Loss: 3.799284896850586
Epoch: 3/5 Loss: 3.7591514730453492
Epoch: 3/5 Loss: 3.8213213682174683
Epoch: 3/5 Loss: 3.7904862189292907
Epoch: 3/5 Loss: 3.7652175307273863
Epoch: 3/5 Loss: 3.775081684589386
Epoch: 3/5 Loss: 3.805647120475769
Epoch: 3/5 Loss: 3.8115931391716003
Epoch: 3/5 Loss: 3.7801232147216797
Epoch: 3/5 Loss: 3.7809012794494627
Epoch: 3/5 Loss: 3.7960203099250793
Epoch: 4/5 Loss: 3.717857441615537
Epoch: 4/5 Loss: 3.6277985835075377
Epoch: 4/5 Loss: 3.6847699308395385
Epoch: 4/5 Loss: 3.7003368139266968
Epoch: 4/5 Loss: 3.6774175333976746
Epoch: 4/5 Loss: 3.669505636692047
Epoch: 4/5 Loss: 3.6503692483901977
Epoch: 4/5 Loss: 3.6640839791297912
Epoch: 4/5 Loss: 3.648356976509094
Epoch: 4/5 Loss: 3.6605164074897765
Epoch: 4/5 Loss: 3.640787110328674
Epoch: 4/5 Loss: 3.674849519729614
Epoch: 4/5 Loss: 3.6508079314231874
Epoch: 4/5 Loss: 3.636616377830505
Epoch: 4/5 Loss: 3.6928508830070497
Epoch: 4/5 Loss: 3.710030746459961
Epoch: 4/5 Loss: 3.671573815345764
Epoch: 4/5 Loss: 3.676208462715149
Epoch: 4/5 Loss: 3.695106589794159
Epoch: 4/5 Loss: 3.6542873120307924
Epoch: 4/5 Loss: 3.6725457859039308
Epoch: 4/5 Loss: 3.676883533000946
Epoch: 4/5 Loss: 3.641233990192413
Epoch: 4/5 Loss: 3.6844634914398195
Epoch: 4/5 Loss: 3.684463300704956
Epoch: 4/5 Loss: 3.6984883618354796
Epoch: 4/5 Loss: 3.6826293683052063
Epoch: 4/5 Loss: 3.658426206111908
Epoch: 4/5 Loss: 3.6646661019325255
Epoch: 4/5 Loss: 3.6891902613639833
Epoch: 4/5 Loss: 3.6704711723327637
Epoch: 4/5 Loss: 3.652671253681183
Epoch: 4/5 Loss: 3.6616198945045473
Epoch: 4/5 Loss: 3.6719136381149293
Epoch: 5/5 Loss: 3.6039040531617044
Epoch: 5/5 Loss: 3.5313985776901244
Epoch: 5/5 Loss: 3.5652358174324035
Epoch: 5/5 Loss: 3.5692408585548403
Epoch: 5/5 Loss: 3.5740801239013673
Epoch: 5/5 Loss: 3.53299649477005
Epoch: 5/5 Loss: 3.582785394191742
Epoch: 5/5 Loss: 3.562717342376709
Epoch: 5/5 Loss: 3.5709479212760926
Epoch: 5/5 Loss: 3.5633581590652468
Epoch: 5/5 Loss: 3.5897817158699037
Epoch: 5/5 Loss: 3.5493237590789795
Epoch: 5/5 Loss: 3.5699755334854126
Epoch: 5/5 Loss: 3.5801324033737183
Epoch: 5/5 Loss: 3.5854428577423096
Epoch: 5/5 Loss: 3.5198756718635558
Epoch: 5/5 Loss: 3.5358573722839357
Epoch: 5/5 Loss: 3.5455330348014833
Epoch: 5/5 Loss: 3.550049612522125
Epoch: 5/5 Loss: 3.550722641944885
Epoch: 5/5 Loss: 3.585439095497131
Epoch: 5/5 Loss: 3.557195086479187
Epoch: 5/5 Loss: 3.549436020851135
Epoch: 5/5 Loss: 3.5957862186431884
Epoch: 5/5 Loss: 3.553499011993408
Epoch: 5/5 Loss: 3.57416659116745
Epoch: 5/5 Loss: 3.5646439743041993
Epoch: 5/5 Loss: 3.596697154045105
Epoch: 5/5 Loss: 3.563871457576752
Epoch: 5/5 Loss: 3.5278802132606506
Epoch: 5/5 Loss: 3.538695294857025
Epoch: 5/5 Loss: 3.5814749240875243
Epoch: 5/5 Loss: 3.589377660751343
Epoch: 5/5 Loss: 3.567865424156189
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I trained the network on my local GPU machine. I first tried a batch size of 64 and a learning rate of 0.003, but this did not converge quickly; it got stuck in a local minimum and could not reach a loss below 3.5. I then tried a larger batch size (my GPU utilization was low, so I bumped it up) and a slower learning rate, which dropped below roughly 3.5-3.6 within 5-6 epochs and reached about 3.2 after 15 epochs (below the 3.5 target). That run used a sequence length of 25. I also wanted to experiment with a larger sequence size, so I trained the network with a sequence length of 50, a batch size of 256, and 10 epochs this time; it likewise converged to below 3.5 within those 10 epochs. **However, I am not sure how to properly distinguish between these two trained models; which one is better?** **Is validation loss alone the right way to pick the best model?** (One common approach is sketched below.) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
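On the question raised above about choosing between the two trained models: a common approach is to hold out part of the data as a validation split, compare the average cross-entropy loss on it (lower held-out loss means better generalization), and also eyeball the coherence of the generated scripts. A minimal sketch, assuming `val_loader` is a DataLoader built with `batch_data` from held-out text (a name introduced here purely for illustration):

```python
import numpy as np
import torch

def evaluate_sketch(rnn, val_loader, criterion, batch_size):
    # average cross-entropy loss over a held-out split (lower is better)
    on_gpu = next(rnn.parameters()).is_cuda
    rnn.eval()
    losses = []
    hidden = rnn.init_hidden(batch_size)
    with torch.no_grad():
        for inputs, labels in val_loader:
            if inputs.size(0) != batch_size:
                continue                      # skip the final partial batch
            if on_gpu:
                inputs, labels = inputs.cuda(), labels.cuda()
            output, hidden = rnn(inputs, hidden)
            losses.append(criterion(output, labels).item())
    rnn.train()
    return float(np.mean(losses))
```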
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# move the sequence back to the cpu before updating it with numpy
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 100 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
elaine: saturate disorient tolerate saturate tolerate saturate disorient tolerate 'yeah............ i think i could have to get a job.
jerry: oh, yeah, yeah, i'm sorry..
jerry: i don't think so, i don't think so.
morty:(on intercom) what do you think?
jerry: well, i think i had to talk to her.
jerry: i know.
jerry: what are we talking about?
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_3.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each of them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {"." : "||Period||",
"," : "||Comma||",
'"' : "||Quotation_Mark||",
";" : "||Semicolon||",
"!" : "||Exclamation_mark||",
"?" : "||Question_mark||",
"(" : "||Left_Parentheses||",
")" : "||Right_Parentheses||",
"-" : "||Dash||",
"\n" : "||Return||"}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to a file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```
BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5]  # features
6             # target
```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
feature_tensors = []
target_tensors = []
for i in range(len(words) - sequence_length):
feature_tensors.append(words[i : i + sequence_length])
target_tensors.append(words[i + sequence_length])
data = TensorDataset(torch.from_numpy(np.asarray(feature_tensors)), torch.from_numpy(np.asarray(target_tensors)))
dataLoader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return dataLoader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 36, 37, 38, 39, 40],
[ 14, 15, 16, 17, 18],
[ 44, 45, 46, 47, 48],
[ 16, 17, 18, 19, 20],
[ 21, 22, 23, 24, 25],
[ 26, 27, 28, 29, 30],
[ 2, 3, 4, 5, 6],
[ 10, 11, 12, 13, 14],
[ 41, 42, 43, 44, 45],
[ 20, 21, 22, 23, 24]])
torch.Size([10])
tensor([ 41, 19, 49, 21, 26, 31, 7, 15, 46, 25])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout later
# self.dropout = nn.Dropout(0.25)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
# output = self.dropout(lstm_output)
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch of labels
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), max_norm=5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function below. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 7 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 250
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 3000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.726041041612625
Epoch: 1/10 Loss: 4.2466800963878635
Epoch: 2/10 Loss: 4.013005056568692
Epoch: 2/10 Loss: 3.9271443860530852
Epoch: 3/10 Loss: 3.7985737514832327
Epoch: 3/10 Loss: 3.765283320824305
Epoch: 4/10 Loss: 3.6726168634430056
Epoch: 4/10 Loss: 3.668744683980942
Epoch: 5/10 Loss: 3.5784635943991523
Epoch: 5/10 Loss: 3.5919457669258117
Epoch: 6/10 Loss: 3.51369749153814
Epoch: 6/10 Loss: 3.5273837795257568
Epoch: 7/10 Loss: 3.4528840551933935
Epoch: 7/10 Loss: 3.4793679432868956
Epoch: 8/10 Loss: 3.413141442462802
Epoch: 8/10 Loss: 3.433897761265437
Epoch: 9/10 Loss: 3.371523099260465
Epoch: 9/10 Loss: 3.401685496966044
Epoch: 10/10 Loss: 3.3346742933555955
Epoch: 10/10 Loss: 3.3665654594103493
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**

| Trial | sequence_length | batch_size | embedding_dim | hidden_dim | Loss | dropout at fc layer |
|:-----:|:---------------:|:----------:|:-------------:|:----------:|:------------------------------:|:-------------------:|
| 1st | 50 | 16 | 300 | 256 | 4.688908943653106 (1 epoch) | 0.25 |
| 2nd | 25 | 32 | 200 | 200 | 4.157939701477686 (5 epochs) | 0.25 |
| 3rd | 10 | 32 | 200 | 200 | 4.169152552286784 (4 epochs) | 0.25 |
| 4th | 7 | 32 | 250 | 250 | 3.3665654594103493 (10 epochs) | 0 |

At first, I used learning_rate = 0.01 and it couldn't converge, so I decreased the learning rate to 0.001. Then I tried a few runs; the results are in the table above. In the first run, I used a long sequence length (50) and a high embedding dimension (300), and it took forever to converge, so I stopped it after the first epoch. Then I reduced the sequence length by half (to 25) and the embedding dimension to 200, which let me increase the batch size without running out of memory. It trained a bit faster, but the loss still stayed at around 4.15 after 5 epochs. Next, I continued reducing the sequence length, but it didn't do better. Then I asked people on Slack, and some said that I should not apply dropout before **the fc layer**, so I removed that dropout. I also realized that the sequence length can be reduced to 7, because we found at the beginning that the average number of words per line is about 5.5, which makes sense for a TV script; a sequence length of 7 is therefore more reasonable. (A sketch of scripting these trials follows the checkpoint note below.) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
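For reference, the manual trials in the table above could also be scripted. The sketch below was not run for this submission; it reuses `batch_data`, `RNN`, `train_rnn` and the globals already defined in this notebook, and the two-epoch budget and `show_every_n_batches` value are arbitrary choices, just enough to rank configurations by the loss that `train_rnn` prints.
```
import torch
import torch.nn as nn

# (sequence_length, batch_size, embedding_dim, hidden_dim) taken from the table above
trials = [
    (25, 32, 200, 200),
    (10, 32, 200, 200),
    (7,  32, 250, 250),
]

for seq_len, bs, emb_dim, hid_dim in trials:
    # train_rnn reads the global train_loader, so rebuild it for each sequence length
    train_loader = batch_data(int_text, seq_len, bs)
    model = RNN(len(vocab_to_int), len(vocab_to_int), emb_dim, hid_dim, n_layers=2, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    print('--- seq_len={} batch={} emb={} hidden={} ---'.format(seq_len, bs, emb_dim, hid_dim))
    train_rnn(model, bs, optimizer, nn.CrossEntropyLoss(), n_epochs=2, show_every_n_batches=1000)
```
Each short run prints its own loss trace, which is enough to compare the configurations side by side before committing to a full training run.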
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:45: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counter = Counter(text)
sorted_vocab_list = sorted(word_counter, key=word_counter.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab_list)} #Do not need to start from index 1 because no padding.
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each of them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.': "||dot||",
',': "||comma||",
'"': "||doublequote||",
';': "||semicolon||",
'!': "||bang||",
'?': "||questionmark||",
'(': "||leftparens||",
')': "||rightparens||",
'-': "||dash||",
'\n': "||return||",
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to a file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```
BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5]  # features
6             # target
```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
print(words, sequence_length, batch_size)
for start in range(len(words) - sequence_length):
end = start + sequence_length
features.append(words[start:end])
targets.append(words[end])
data = TensorDataset(torch.tensor(features), torch.tensor(targets))
data_loader = DataLoader(data, batch_size, True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
range(0, 50) 5 10
torch.Size([10, 5])
tensor([[ 18, 19, 20, 21, 22],
[ 42, 43, 44, 45, 46],
[ 41, 42, 43, 44, 45],
[ 1, 2, 3, 4, 5],
[ 15, 16, 17, 18, 19],
[ 32, 33, 34, 35, 36],
[ 26, 27, 28, 29, 30],
[ 30, 31, 32, 33, 34],
[ 39, 40, 41, 42, 43],
[ 4, 5, 6, 7, 8]])
torch.Size([10])
tensor([ 23, 47, 46, 6, 20, 37, 31, 35, 44, 9])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
x = self.embed(nn_input)
x, hidden = self.rnn(x, hidden)
x = x.contiguous().view(-1, self.hidden_dim)
x = self.fc(x)
x = x.view(nn_input.size(0), -1, self.output_size)[:, -1]
# return one batch of output word scores and the hidden state
return x, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
if train_on_gpu:
hidden = (hidden[0].cuda(), hidden[1].cuda())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
optimizer.zero_grad()
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function below. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 9
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 9 epoch(s)...
Epoch: 1/9 Loss: 5.9196070919036865
Epoch: 1/9 Loss: 5.154569608688354
Epoch: 1/9 Loss: 4.861867110252381
Epoch: 1/9 Loss: 4.663630374908447
Epoch: 1/9 Loss: 4.568453297615052
Epoch: 1/9 Loss: 4.501800373077392
Epoch: 1/9 Loss: 4.4400984477996825
Epoch: 1/9 Loss: 4.407389422893524
Epoch: 1/9 Loss: 4.359781648635864
Epoch: 1/9 Loss: 4.311809137821197
Epoch: 1/9 Loss: 4.285976921081543
Epoch: 1/9 Loss: 4.265631782531738
Epoch: 1/9 Loss: 4.228047152042389
Epoch: 2/9 Loss: 4.1530766147578095
Epoch: 2/9 Loss: 4.056858470439911
Epoch: 2/9 Loss: 4.049036991596222
Epoch: 2/9 Loss: 4.028184664726258
Epoch: 2/9 Loss: 4.027896447658539
Epoch: 2/9 Loss: 3.99689031124115
Epoch: 2/9 Loss: 3.996437876701355
Epoch: 2/9 Loss: 3.978628529548645
Epoch: 2/9 Loss: 4.00464665555954
Epoch: 2/9 Loss: 3.986073437690735
Epoch: 2/9 Loss: 3.9736593861579896
Epoch: 2/9 Loss: 3.9845535202026365
Epoch: 2/9 Loss: 3.9533566370010376
Epoch: 3/9 Loss: 3.887317371565245
Epoch: 3/9 Loss: 3.7951194486618043
Epoch: 3/9 Loss: 3.7917876377105713
Epoch: 3/9 Loss: 3.7811633620262146
Epoch: 3/9 Loss: 3.7886141839027405
Epoch: 3/9 Loss: 3.8130338320732116
Epoch: 3/9 Loss: 3.8106535000801087
Epoch: 3/9 Loss: 3.8175085015296935
Epoch: 3/9 Loss: 3.784125514984131
Epoch: 3/9 Loss: 3.797453468799591
Epoch: 3/9 Loss: 3.80401885843277
Epoch: 3/9 Loss: 3.808668386936188
Epoch: 3/9 Loss: 3.8207490234375
Epoch: 4/9 Loss: 3.7332648527265455
Epoch: 4/9 Loss: 3.634948308467865
Epoch: 4/9 Loss: 3.647617848396301
Epoch: 4/9 Loss: 3.6423205795288087
Epoch: 4/9 Loss: 3.64229798412323
Epoch: 4/9 Loss: 3.647725365638733
Epoch: 4/9 Loss: 3.6609289746284484
Epoch: 4/9 Loss: 3.6745360856056215
Epoch: 4/9 Loss: 3.6791053624153136
Epoch: 4/9 Loss: 3.6619578566551207
Epoch: 4/9 Loss: 3.6766717824935915
Epoch: 4/9 Loss: 3.678693187713623
Epoch: 4/9 Loss: 3.694414616584778
Epoch: 5/9 Loss: 3.5895329953716266
Epoch: 5/9 Loss: 3.5111766138076783
Epoch: 5/9 Loss: 3.5191890153884886
Epoch: 5/9 Loss: 3.5270044150352478
Epoch: 5/9 Loss: 3.5389353365898133
Epoch: 5/9 Loss: 3.546943061828613
Epoch: 5/9 Loss: 3.5659065365791323
Epoch: 5/9 Loss: 3.5496874227523803
Epoch: 5/9 Loss: 3.5708318347930907
Epoch: 5/9 Loss: 3.5583040256500245
Epoch: 5/9 Loss: 3.572103688716888
Epoch: 5/9 Loss: 3.583995021343231
Epoch: 5/9 Loss: 3.5945741410255434
Epoch: 6/9 Loss: 3.508891934580847
Epoch: 6/9 Loss: 3.4223804202079773
Epoch: 6/9 Loss: 3.4277972531318666
Epoch: 6/9 Loss: 3.4104116830825806
Epoch: 6/9 Loss: 3.455143273830414
Epoch: 6/9 Loss: 3.4551423745155336
Epoch: 6/9 Loss: 3.446984980583191
Epoch: 6/9 Loss: 3.4660440020561216
Epoch: 6/9 Loss: 3.490551306247711
Epoch: 6/9 Loss: 3.4813959879875185
Epoch: 6/9 Loss: 3.5088824620246886
Epoch: 6/9 Loss: 3.50870436668396
Epoch: 6/9 Loss: 3.512404335975647
Epoch: 7/9 Loss: 3.42333300760779
Epoch: 7/9 Loss: 3.3405881376266477
Epoch: 7/9 Loss: 3.349756766796112
Epoch: 7/9 Loss: 3.3535381975173952
Epoch: 7/9 Loss: 3.3849702200889586
Epoch: 7/9 Loss: 3.3748793692588808
Epoch: 7/9 Loss: 3.400786780834198
Epoch: 7/9 Loss: 3.4115707964897157
Epoch: 7/9 Loss: 3.4002523488998415
Epoch: 7/9 Loss: 3.4308757686614992
Epoch: 7/9 Loss: 3.403478935718536
Epoch: 7/9 Loss: 3.4190732889175415
Epoch: 7/9 Loss: 3.436656415462494
Epoch: 8/9 Loss: 3.3615086432950045
Epoch: 8/9 Loss: 3.2641971321105956
Epoch: 8/9 Loss: 3.283445837497711
Epoch: 8/9 Loss: 3.2901403641700746
Epoch: 8/9 Loss: 3.3264351720809935
Epoch: 8/9 Loss: 3.3202496209144594
Epoch: 8/9 Loss: 3.3229446692466738
Epoch: 8/9 Loss: 3.3508189005851747
Epoch: 8/9 Loss: 3.3521045947074892
Epoch: 8/9 Loss: 3.346731041908264
Epoch: 8/9 Loss: 3.3734106063842773
Epoch: 8/9 Loss: 3.379966462612152
Epoch: 8/9 Loss: 3.38812366771698
Epoch: 9/9 Loss: 3.2891932857540986
Epoch: 9/9 Loss: 3.213767638206482
Epoch: 9/9 Loss: 3.239010145187378
Epoch: 9/9 Loss: 3.2513397607803345
Epoch: 9/9 Loss: 3.258533630371094
Epoch: 9/9 Loss: 3.2619752712249754
Epoch: 9/9 Loss: 3.275171920776367
Epoch: 9/9 Loss: 3.2787757329940797
Epoch: 9/9 Loss: 3.28660843372345
Epoch: 9/9 Loss: 3.293291466712952
Epoch: 9/9 Loss: 3.327093819618225
Epoch: 9/9 Loss: 3.320905412197113
Epoch: 9/9 Loss: 3.3482332491874693
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Most of the parameters were selected based on community input gathered from online sources. Sequence length was a little special in that I could not find many suggestions online, so I tested sequence lengths of 4, 6, 8, 16, 32, 64, 128, and 1024. I found that shorter sequences were effective, though the results were not conclusive; 8 achieved the best loss in a fairly short time. I also tested other parameters such as the hidden dimension and the number of layers. The conclusion was that a higher embedding_dim did not improve performance, while a higher hidden_dim did, and 2-3 layers made little difference. (A quick parameter-count comparison follows the checkpoint note below.) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
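To make the embedding_dim vs. hidden_dim observation above more concrete, one can compare how many trainable parameters each knob adds. This is only a quick sketch using the `RNN` class and the `vocab_size`, `output_size` and `n_layers` globals defined above; the alternative dimension values are illustrative, not configurations that were trained.
```
def count_params(model):
    # total number of trainable parameters in the model
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

for emb_dim, hid_dim in [(256, 512), (512, 512), (256, 1024)]:
    model = RNN(vocab_size, output_size, emb_dim, hid_dim, n_layers)
    print('embedding_dim={:4d}  hidden_dim={:4d}  ->  {:,} parameters'.format(
        emb_dim, hid_dim, count_params(model)))
```
Most of the parameters added by a larger `hidden_dim` sit in the LSTM and the fully-connected output layer, while a larger `embedding_dim` mainly grows the embedding table, which may help explain why increasing the hidden dimension had the bigger effect here.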
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # move the current sequence off the GPU (if one is in use) so numpy can manipulate it
        if train_on_gpu:
            current_seq = current_seq.cpu()
        # the generated word becomes the next "current sequence" and the cycle can continue
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
import numpy as np
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:35: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = dict(enumerate(vocab))
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {".": "||Period||",
",": "||Comma||",
            '"': "||Quotation_Mark||",
";": "||Semicolon||",
"?": "||Question_Mark||",
"-": "||Dash||",
"!": "||Exclamation_Mark||",
"(": "||Left_Parenthesis||",
")": "||Right_Parenthesis||",
"\n": "||Return||"}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
words = np.array(words)
num_of_sequences = len(words) - sequence_length
    # Build a (num_of_sequences, sequence_length) index matrix via broadcasting:
    # row i holds the indices i .. i+sequence_length-1
indexer = np.arange(num_of_sequences)[:, None] + np.arange(sequence_length)[None, :]
# Get features array
features = words[indexer]
targets = words[sequence_length:]
dataset = TensorDataset(torch.from_numpy(features), torch.from_numpy(targets))
# return a dataloader
return DataLoader(dataset, shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
sample_loader = batch_data(int_text, 4, 10)
dataiter = iter(sample_loader)
sample_x, sample_y = dataiter.next()
print("Sample input size ", sample_x.size())
print("Sample input:\n", sample_x)
print()
print("Sample targets size: ", sample_y.size())
print("Sample targets:\n", sample_y)
###Output
Sample input size torch.Size([10, 4])
Sample input:
tensor([[ 1, 0, 0, 13],
[ 400, 1125, 1, 11],
[ 5, 1076, 8, 186],
[ 1, 91, 59, 15],
[ 3, 53, 11, 43],
[ 44, 51, 6, 693],
[ 345, 5476, 1, 0],
[ 412, 1, 313, 57],
[ 1, 0, 0, 16],
[ 548, 20, 6, 501]], dtype=torch.int32)
Sample targets size: torch.Size([10])
Sample targets:
tensor([108, 35, 28, 2, 806, 23, 0, 1, 77, 1], dtype=torch.int32)
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[17, 18, 19, 20, 21],
[39, 40, 41, 42, 43],
[18, 19, 20, 21, 22],
[ 7, 8, 9, 10, 11],
[26, 27, 28, 29, 30],
[43, 44, 45, 46, 47],
[31, 32, 33, 34, 35],
[19, 20, 21, 22, 23],
[33, 34, 35, 36, 37],
[32, 33, 34, 35, 36]], dtype=torch.int32)
torch.Size([10])
tensor([22, 44, 23, 12, 31, 48, 36, 24, 38, 37], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.output_size = output_size
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = self.dropout(lstm_out)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
rnn.zero_grad()
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
output, hidden = rnn(inp, hidden)
loss = criterion(output, target.long())
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 300
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.0002
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 500
# Hidden Dimension
hidden_dim = 1000
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 5.542212463378906
Epoch: 1/30 Loss: 4.895925772666931
Epoch: 1/30 Loss: 4.716342515468598
Epoch: 1/30 Loss: 4.563274583816528
Epoch: 1/30 Loss: 4.470524339199066
Epoch: 2/30 Loss: 4.342157107095434
Epoch: 2/30 Loss: 4.267198216438294
Epoch: 2/30 Loss: 4.228925843715667
Epoch: 2/30 Loss: 4.192948694705963
Epoch: 2/30 Loss: 4.183900503635407
Epoch: 3/30 Loss: 4.095780112638013
Epoch: 3/30 Loss: 4.048094055175781
Epoch: 3/30 Loss: 4.047820780277252
Epoch: 3/30 Loss: 4.0481199040412905
Epoch: 3/30 Loss: 4.027770218372345
Epoch: 4/30 Loss: 3.975011658938056
Epoch: 4/30 Loss: 3.9300294289588926
Epoch: 4/30 Loss: 3.922672815322876
Epoch: 4/30 Loss: 3.9054472136497496
Epoch: 4/30 Loss: 3.911331367492676
Epoch: 5/30 Loss: 3.850355715080253
Epoch: 5/30 Loss: 3.8214057908058168
Epoch: 5/30 Loss: 3.809553246974945
Epoch: 5/30 Loss: 3.8232222619056704
Epoch: 5/30 Loss: 3.8197561845779417
Epoch: 6/30 Loss: 3.7761310925831655
Epoch: 6/30 Loss: 3.7353790612220763
Epoch: 6/30 Loss: 3.7252446403503416
Epoch: 6/30 Loss: 3.729586566925049
Epoch: 6/30 Loss: 3.7215491671562195
Epoch: 7/30 Loss: 3.6940583658365402
Epoch: 7/30 Loss: 3.6491436610221863
Epoch: 7/30 Loss: 3.6552856369018554
Epoch: 7/30 Loss: 3.6646816473007204
Epoch: 7/30 Loss: 3.668620755672455
Epoch: 8/30 Loss: 3.611472965144425
Epoch: 8/30 Loss: 3.5726354541778567
Epoch: 8/30 Loss: 3.589263671875
Epoch: 8/30 Loss: 3.5918682265281676
Epoch: 8/30 Loss: 3.5941107330322266
Epoch: 9/30 Loss: 3.548078870479274
Epoch: 9/30 Loss: 3.510521921157837
Epoch: 9/30 Loss: 3.5163067588806154
Epoch: 9/30 Loss: 3.527659899234772
Epoch: 9/30 Loss: 3.526112545490265
Epoch: 10/30 Loss: 3.475713120702604
Epoch: 10/30 Loss: 3.4444041004180908
Epoch: 10/30 Loss: 3.4463197193145754
Epoch: 10/30 Loss: 3.468018494606018
Epoch: 10/30 Loss: 3.465712972640991
Epoch: 11/30 Loss: 3.4211348967831525
Epoch: 11/30 Loss: 3.379080743789673
Epoch: 11/30 Loss: 3.3891129322052
Epoch: 11/30 Loss: 3.4033216271400453
Epoch: 11/30 Loss: 3.4290796446800234
Epoch: 12/30 Loss: 3.364989779345913
Epoch: 12/30 Loss: 3.32780682182312
Epoch: 12/30 Loss: 3.3318721280097963
Epoch: 12/30 Loss: 3.344575346946716
Epoch: 12/30 Loss: 3.356425541400909
Epoch: 13/30 Loss: 3.3034176787271528
Epoch: 13/30 Loss: 3.269027329444885
Epoch: 13/30 Loss: 3.2948418641090393
Epoch: 13/30 Loss: 3.2907507238388063
Epoch: 13/30 Loss: 3.2979082493782044
Epoch: 14/30 Loss: 3.2550471892459787
Epoch: 14/30 Loss: 3.208519880771637
Epoch: 14/30 Loss: 3.242331328868866
Epoch: 14/30 Loss: 3.239235269546509
Epoch: 14/30 Loss: 3.245745574951172
Epoch: 15/30 Loss: 3.194709895203562
Epoch: 15/30 Loss: 3.149409110069275
Epoch: 15/30 Loss: 3.1730346984863282
Epoch: 15/30 Loss: 3.193666443824768
Epoch: 15/30 Loss: 3.2061328363418578
Epoch: 16/30 Loss: 3.1483796860674302
Epoch: 16/30 Loss: 3.1065406317710877
Epoch: 16/30 Loss: 3.1244652132987976
Epoch: 16/30 Loss: 3.1440410847663878
Epoch: 16/30 Loss: 3.146430268287659
Epoch: 17/30 Loss: 3.1040270747530743
Epoch: 17/30 Loss: 3.0633580613136293
Epoch: 17/30 Loss: 3.074093391418457
Epoch: 17/30 Loss: 3.086872152328491
Epoch: 17/30 Loss: 3.113299153327942
Epoch: 18/30 Loss: 3.0528990580391175
Epoch: 18/30 Loss: 3.017056652545929
Epoch: 18/30 Loss: 3.0252189779281617
Epoch: 18/30 Loss: 3.0593305611610413
Epoch: 18/30 Loss: 3.0659578919410704
Epoch: 19/30 Loss: 3.002933645787489
Epoch: 19/30 Loss: 2.966243000984192
Epoch: 19/30 Loss: 2.9975133166313173
Epoch: 19/30 Loss: 2.9976149559020997
Epoch: 19/30 Loss: 3.0162581768035888
Epoch: 20/30 Loss: 2.966088581183219
Epoch: 20/30 Loss: 2.9232900500297547
Epoch: 20/30 Loss: 2.941220899105072
Epoch: 20/30 Loss: 2.9631529750823975
Epoch: 20/30 Loss: 2.9715343270301817
Epoch: 21/30 Loss: 2.932411232563109
Epoch: 21/30 Loss: 2.888529004573822
Epoch: 21/30 Loss: 2.905654210090637
Epoch: 21/30 Loss: 2.9194550075531005
Epoch: 21/30 Loss: 2.9290501523017882
Epoch: 22/30 Loss: 2.882127876271937
Epoch: 22/30 Loss: 2.843546413421631
Epoch: 22/30 Loss: 2.86730233335495
Epoch: 22/30 Loss: 2.892877109527588
Epoch: 22/30 Loss: 2.8914437890052795
Epoch: 23/30 Loss: 2.8472109110480344
Epoch: 23/30 Loss: 2.7972644090652468
Epoch: 23/30 Loss: 2.826973198413849
Epoch: 23/30 Loss: 2.84661204624176
Epoch: 23/30 Loss: 2.857697295188904
Epoch: 24/30 Loss: 2.8112276293881013
Epoch: 24/30 Loss: 2.7634284324645995
Epoch: 24/30 Loss: 2.789399490356445
Epoch: 24/30 Loss: 2.8077636613845827
Epoch: 24/30 Loss: 2.817365035057068
Epoch: 25/30 Loss: 2.767851111577692
Epoch: 25/30 Loss: 2.735499891757965
Epoch: 25/30 Loss: 2.7528936777114867
Epoch: 25/30 Loss: 2.768033914089203
Epoch: 25/30 Loss: 2.7931802878379823
Epoch: 26/30 Loss: 2.7360093939096073
Epoch: 26/30 Loss: 2.706914391517639
Epoch: 26/30 Loss: 2.72361363363266
Epoch: 26/30 Loss: 2.7326907453536986
Epoch: 26/30 Loss: 2.7498406858444215
Epoch: 27/30 Loss: 2.7023294440031296
Epoch: 27/30 Loss: 2.6692112035751343
Epoch: 27/30 Loss: 2.681900881290436
Epoch: 27/30 Loss: 2.7061347808837892
Epoch: 27/30 Loss: 2.7114641613960266
Epoch: 28/30 Loss: 2.6720938471826474
Epoch: 28/30 Loss: 2.6339118866920472
Epoch: 28/30 Loss: 2.650689570903778
Epoch: 28/30 Loss: 2.6737491626739502
Epoch: 28/30 Loss: 2.6799437975883484
Epoch: 29/30 Loss: 2.6368560205383145
Epoch: 29/30 Loss: 2.5979518666267394
Epoch: 29/30 Loss: 2.6123044695854185
Epoch: 29/30 Loss: 2.642937618255615
Epoch: 29/30 Loss: 2.662935221672058
Epoch: 30/30 Loss: 2.611776029708329
Epoch: 30/30 Loss: 2.573218704223633
Epoch: 30/30 Loss: 2.5892706742286684
Epoch: 30/30 Loss: 2.609111734390259
Epoch: 30/30 Loss: 2.6288430924415587
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** A few hundred is typically said to be a good choice for the embedding dimension, so I started with it set to 256, and I initially set the hidden dimension to 500. However, with these values the model did not seem to be minimising the loss enough, so I increased the embedding dimension to 500 and the hidden dimension to 1000, as this should allow the model to handle more complicated relationships. Typically the number of layers in an LSTM is between 1 and 3, so I set the number of layers to 3. I initially set the learning rate to 0.001, but the loss was not decreasing and may even have been increasing, so I reduced it to 0.0002 and the loss is now decreasing. It may be possible to find a slightly more optimal learning rate between these two values, but I felt 0.0002 was good enough. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
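A minimal sketch of the kind of learning-rate comparison described above: each candidate rate is trained for a small, fixed number of batches, and the early and late average losses are compared to see whether the loss is actually falling. It reuses the `RNN`, `train_loader`, and `forward_back_prop` defined in this notebook; the helper name `probe_learning_rate`, the candidate rates, and the batch counts are illustrative assumptions, not what was actually run.
```
# Hypothetical helper (not part of the original notebook): check whether the
# loss decreases at a given learning rate before committing to a long run.
import numpy as np
import torch
import torch.nn as nn

def probe_learning_rate(lr, n_batches=300):
    rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
    if train_on_gpu:
        rnn.cuda()
    optimizer = torch.optim.Adam(rnn.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    hidden = rnn.init_hidden(batch_size)
    losses = []
    for batch_i, (inputs, labels) in enumerate(train_loader, 1):
        if batch_i > n_batches or inputs.size(0) != batch_size:
            break
        loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
        losses.append(loss)
    # compare early vs. late average loss: the second number should be lower
    return np.mean(losses[:50]), np.mean(losses[-50:])

# e.g. for lr in [0.001, 0.0005, 0.0002]:
#          print(lr, probe_learning_rate(lr))
```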
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        if train_on_gpu:
            current_seq = current_seq.cpu()  # move to cpu so numpy can manipulate the sequence
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:, so you know that guy, uh, raise, or not.
elaine: oh, i can't. i'm sorry.
kramer: well, you know, it's not the same.
jerry: i don't know, i know that... you know i think i can get this thing over here. i don't have any money.
kramer:(to jerry) hey. you got this straight? hey, look, i got a lot better than ya.
susan:(to jerry) hey, you know what, what about this?
george: what? i mean, you know, i don't know what i mean..
jerry: i can't believe i saw that guy who has a good idea for the himalayan in gymnastics food.
kramer: well, i got a feeling about you two.(to jerry) hey.
elaine: hey.
jerry: hey.
george: hey, how you doing?
kramer: well, you know, i was wondering, i was just wondering... i have a very good feeling about this guy. i don't know what to do.
jerry:(to kramer) you know, you don't have anything in the first place.
elaine: well, i guess i can see the whole story.
jerry:(pointing) what is that?
jerry: oh, i was thinking of myself. you know what you think? i mean, you think i have an idea, you have no idea how much i am about it. but, if you don't mind, i can't stand you! i can't believe this is the first time i ever ever heard of it.
jerry: i don't understand how it was such an attractive woman... she had a good time.
elaine: oh, yeah, i got it. i gotta see if i could get a picture.(he leaves)
george:(to elaine) so, what do you think of all that?
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
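As a rough illustration of the smaller-vocabulary suggestion above, rare words could be collapsed into a single unknown token before the lookup tables are built; the `min_count` threshold, the `<UNK>` token, and the helper name below are illustrative assumptions, not something this notebook actually does.
```
from collections import Counter

def prune_vocabulary(words, min_count=5, unk_token='<UNK>'):
    """Replace words seen fewer than min_count times with one unknown token,
    shrinking the vocabulary the network has to model."""
    counts = Counter(words)
    return [w if counts[w] >= min_count else unk_token for w in words]

# Illustrative usage, before building the lookup tables:
# pruned_words = prune_vocabulary(text.split(), min_count=5)
# vocab_to_int, int_to_vocab = create_lookup_tables(pruned_words)
```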
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
    # Count how many times each word occurs
c = Counter(text)
vocab_to_int = {}
int_to_vocab = {}
for (idx,(e,cnt)) in enumerate(c.most_common(),0):
vocab_to_int[e] = idx
int_to_vocab[idx] = e
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = {
'.' : '<PERIOD>',
',' : '<COMMA>',
'"' : '<QUOTATION>',
';' : '<SEMICOLON>',
'!' : '<EXCLAMATION>',
'?' : '<QUESTION>',
'(' : '<OPEN_PAREN>',
')' : '<CLOSE_PAREN>',
'-' : '<DASH>',
'\n' : '<NEW_LINE>'
}
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(token_dict)
###Output
{'.': '<PERIOD>', ',': '<COMMA>', '"': '<QUOTATION>', ';': '<SEMICOLON>', '!': '<EXCLAMATION>', '?': '<QUESTION>', '(': '<OPEN_PAREN>', ')': '<CLOSE_PAREN>', '-': '<DASH>', '\n': '<NEW_LINE>'}
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
#batch_size_total = batch_size * sequence_length
#n_batches = len(words)//batch_size_total
features, targets = [], []
for ii in range(len(words)):
ii_end = ii + sequence_length
if ii_end < len(words):
features.append(words[ii:ii_end])
targets.append(words[ii_end])
features = np.asarray(features, dtype=int)
targets = np.asarray(targets, dtype=int)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(features), torch.from_numpy(targets))
# make sure to SHUFFLE your data
loader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 5, 6, 7, 8, 9],
[ 19, 20, 21, 22, 23],
[ 22, 23, 24, 25, 26],
[ 43, 44, 45, 46, 47],
[ 8, 9, 10, 11, 12],
[ 28, 29, 30, 31, 32],
[ 26, 27, 28, 29, 30],
[ 14, 15, 16, 17, 18],
[ 4, 5, 6, 7, 8],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 10, 24, 27, 48, 13, 33, 31, 19, 9, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
#self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
#self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embedding and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
output = self.fc(lstm_out)
# sigmoid function
#out = self.sig(out)
# reshape to be batch_size first
output = output.view(batch_size, -1, self.output_size)
        out = output[:, -1]  # keep only the last batch of word scores
        # return the last batch of word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip=5 # gradient clipping
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
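###Markdown
One detail in `forward_back_prop` deserves a note: `hidden = tuple([each.data for each in hidden])` re-wraps the hidden state so that each backward pass only unrolls through the current batch, not through every batch seen so far. The standalone toy below is illustrative only and is unrelated to the project model; it uses `.detach()`, the more explicit modern equivalent of `.data`.
```
import torch
import torch.nn as nn

toy_lstm = nn.LSTM(input_size=4, hidden_size=8, num_layers=2, batch_first=True)
hidden = (torch.zeros(2, 3, 8), torch.zeros(2, 3, 8))  # (n_layers, batch, hidden_dim)

for _ in range(3):                         # pretend these are consecutive batches
    x = torch.randn(3, 5, 4)               # (batch, seq_len, features)
    out, hidden = toy_lstm(x, hidden)
    # cut the graph here so backward() below only sees the current batch;
    # without this line the second iteration raises a "backward through the
    # graph a second time" error
    hidden = tuple(h.detach() for h in hidden)
    toy_lstm.zero_grad()
    out.sum().backward()                   # stand-in for a real loss
```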
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
with active_session():
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
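###Markdown
Before training, it can be useful to see how large the model defined by these settings actually is. The optional sketch below is illustrative only: it instantiates a throwaway copy of the `RNN` class with the hyperparameters above and counts parameters per layer. With a vocabulary of roughly 46k words, the embedding table and the fully-connected output layer account for most of the parameters, which is the reason given in the answer below for keeping `embedding_dim` at 256 rather than 400.
```
# optional: rough parameter count for the hyperparameters above (not used for training)
sketch_rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)

total = sum(p.numel() for p in sketch_rnn.parameters())
embedding_params = sketch_rnn.embedding.weight.numel()
lstm_params = sum(p.numel() for p in sketch_rnn.lstm.parameters())
fc_params = sum(p.numel() for p in sketch_rnn.fc.parameters())

print('total:     {:,}'.format(total))
print('embedding: {:,}'.format(embedding_params))  # vocab_size * embedding_dim
print('lstm:      {:,}'.format(lstm_params))
print('fc:        {:,}'.format(fc_params))         # hidden_dim * output_size + output_size
```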
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 5.532825699806214
Epoch: 1/15 Loss: 4.8629024744033815
Epoch: 1/15 Loss: 4.669433483123779
Epoch: 1/15 Loss: 4.504791158676148
Epoch: 1/15 Loss: 4.428042637825012
Epoch: 1/15 Loss: 4.367772954463959
Epoch: 1/15 Loss: 4.298104788780212
Epoch: 1/15 Loss: 4.2726039743423465
Epoch: 1/15 Loss: 4.24163552904129
Epoch: 1/15 Loss: 4.206749546051025
Epoch: 1/15 Loss: 4.198954015254975
Epoch: 1/15 Loss: 4.166087312221527
Epoch: 1/15 Loss: 4.141734188556671
Epoch: 2/15 Loss: 4.046153443516593
Epoch: 2/15 Loss: 3.9560570011138916
Epoch: 2/15 Loss: 3.9361091833114625
Epoch: 2/15 Loss: 3.944333690166473
Epoch: 2/15 Loss: 3.9290042114257813
Epoch: 2/15 Loss: 3.9216447649002073
Epoch: 2/15 Loss: 3.9201399416923524
Epoch: 2/15 Loss: 3.9176893124580383
Epoch: 2/15 Loss: 3.9068938250541687
Epoch: 2/15 Loss: 3.9046198434829713
Epoch: 2/15 Loss: 3.9057768349647524
Epoch: 2/15 Loss: 3.887667184829712
Epoch: 2/15 Loss: 3.9130668120384215
Epoch: 3/15 Loss: 3.8184306687983933
Epoch: 3/15 Loss: 3.7451746921539306
Epoch: 3/15 Loss: 3.728155478000641
Epoch: 3/15 Loss: 3.7250712056159974
Epoch: 3/15 Loss: 3.745212097644806
Epoch: 3/15 Loss: 3.75275874710083
Epoch: 3/15 Loss: 3.74138139629364
Epoch: 3/15 Loss: 3.7362422189712525
Epoch: 3/15 Loss: 3.742794972896576
Epoch: 3/15 Loss: 3.754491403102875
Epoch: 3/15 Loss: 3.7569237914085387
Epoch: 3/15 Loss: 3.795139883995056
Epoch: 3/15 Loss: 3.7469643139839173
Epoch: 4/15 Loss: 3.6624901026518106
Epoch: 4/15 Loss: 3.601311396598816
Epoch: 4/15 Loss: 3.6135661435127258
Epoch: 4/15 Loss: 3.612563529968262
Epoch: 4/15 Loss: 3.6107242093086245
Epoch: 4/15 Loss: 3.634239720821381
Epoch: 4/15 Loss: 3.6241551036834716
Epoch: 4/15 Loss: 3.6360190949440003
Epoch: 4/15 Loss: 3.629854612350464
Epoch: 4/15 Loss: 3.638748689174652
Epoch: 4/15 Loss: 3.660061673641205
Epoch: 4/15 Loss: 3.6517494859695434
Epoch: 4/15 Loss: 3.6782753829956056
Epoch: 5/15 Loss: 3.5748015845154093
Epoch: 5/15 Loss: 3.5030285787582396
Epoch: 5/15 Loss: 3.529293653011322
Epoch: 5/15 Loss: 3.544752722263336
Epoch: 5/15 Loss: 3.5262277255058287
Epoch: 5/15 Loss: 3.5269879837036133
Epoch: 5/15 Loss: 3.5471931715011595
Epoch: 5/15 Loss: 3.537584321975708
Epoch: 5/15 Loss: 3.562373523712158
Epoch: 5/15 Loss: 3.5504064817428587
Epoch: 5/15 Loss: 3.5773405966758727
Epoch: 5/15 Loss: 3.593911103248596
Epoch: 5/15 Loss: 3.5838500723838806
Epoch: 6/15 Loss: 3.515779958543886
Epoch: 6/15 Loss: 3.4414275498390197
Epoch: 6/15 Loss: 3.4355605812072754
Epoch: 6/15 Loss: 3.4684629378318785
Epoch: 6/15 Loss: 3.4434701709747313
Epoch: 6/15 Loss: 3.4645385398864748
Epoch: 6/15 Loss: 3.4650891432762148
Epoch: 6/15 Loss: 3.4813038401603698
Epoch: 6/15 Loss: 3.49388755941391
Epoch: 6/15 Loss: 3.5032606301307676
Epoch: 6/15 Loss: 3.5207064938545227
Epoch: 6/15 Loss: 3.5307855820655822
Epoch: 6/15 Loss: 3.51517715883255
Epoch: 7/15 Loss: 3.4456181745165027
Epoch: 7/15 Loss: 3.3771461005210877
Epoch: 7/15 Loss: 3.3846773090362547
Epoch: 7/15 Loss: 3.4048191170692443
Epoch: 7/15 Loss: 3.4103379836082457
Epoch: 7/15 Loss: 3.423795045852661
Epoch: 7/15 Loss: 3.4269042444229125
Epoch: 7/15 Loss: 3.411906246185303
Epoch: 7/15 Loss: 3.43211283493042
Epoch: 7/15 Loss: 3.463759604930878
Epoch: 7/15 Loss: 3.463076445579529
Epoch: 7/15 Loss: 3.4536395978927614
Epoch: 7/15 Loss: 3.48320840215683
Epoch: 8/15 Loss: 3.4076260366429976
Epoch: 8/15 Loss: 3.3300515875816346
Epoch: 8/15 Loss: 3.3234596576690674
Epoch: 8/15 Loss: 3.357691041469574
Epoch: 8/15 Loss: 3.3792418384552003
Epoch: 8/15 Loss: 3.3656737823486327
Epoch: 8/15 Loss: 3.394780083656311
Epoch: 8/15 Loss: 3.4004516296386718
Epoch: 8/15 Loss: 3.3816783595085145
Epoch: 8/15 Loss: 3.4069102473258974
Epoch: 8/15 Loss: 3.4120297656059266
Epoch: 8/15 Loss: 3.4212848253250123
Epoch: 8/15 Loss: 3.4585192375183107
Epoch: 9/15 Loss: 3.3459856111567823
Epoch: 9/15 Loss: 3.287767780303955
Epoch: 9/15 Loss: 3.2996500387191774
Epoch: 9/15 Loss: 3.30378905582428
Epoch: 9/15 Loss: 3.333864600658417
Epoch: 9/15 Loss: 3.331572470188141
Epoch: 9/15 Loss: 3.3586099162101744
Epoch: 9/15 Loss: 3.3454168720245363
Epoch: 9/15 Loss: 3.3660482664108278
Epoch: 9/15 Loss: 3.3933242354393007
Epoch: 9/15 Loss: 3.373943386554718
Epoch: 9/15 Loss: 3.406001955509186
Epoch: 9/15 Loss: 3.4115171217918396
Epoch: 10/15 Loss: 3.3297511086990466
Epoch: 10/15 Loss: 3.2565266275405884
Epoch: 10/15 Loss: 3.27760705947876
Epoch: 10/15 Loss: 3.276241961479187
Epoch: 10/15 Loss: 3.292051549911499
Epoch: 10/15 Loss: 3.3008588137626647
Epoch: 10/15 Loss: 3.296488829612732
Epoch: 10/15 Loss: 3.3341244072914122
Epoch: 10/15 Loss: 3.3363446407318116
Epoch: 10/15 Loss: 3.3583689193725585
Epoch: 10/15 Loss: 3.3426983699798583
Epoch: 10/15 Loss: 3.364784192085266
Epoch: 10/15 Loss: 3.3697974729537963
Epoch: 11/15 Loss: 3.30349520829932
Epoch: 11/15 Loss: 3.2400580887794495
Epoch: 11/15 Loss: 3.258445496082306
Epoch: 11/15 Loss: 3.257075644016266
Epoch: 11/15 Loss: 3.2616396007537842
Epoch: 11/15 Loss: 3.279477642059326
Epoch: 11/15 Loss: 3.282373327732086
Epoch: 11/15 Loss: 3.2984293656349184
Epoch: 11/15 Loss: 3.3059213371276854
Epoch: 11/15 Loss: 3.3199121255874635
Epoch: 11/15 Loss: 3.335271245479584
Epoch: 11/15 Loss: 3.3375997610092165
Epoch: 11/15 Loss: 3.325686914920807
Epoch: 12/15 Loss: 3.266997230311296
Epoch: 12/15 Loss: 3.2111249437332154
Epoch: 12/15 Loss: 3.216985863685608
Epoch: 12/15 Loss: 3.2227586827278136
Epoch: 12/15 Loss: 3.259324100971222
Epoch: 12/15 Loss: 3.255472243309021
Epoch: 12/15 Loss: 3.2459015679359435
Epoch: 12/15 Loss: 3.2820868468284607
Epoch: 12/15 Loss: 3.2822631974220275
Epoch: 12/15 Loss: 3.2923036961555483
Epoch: 12/15 Loss: 3.2835958876609803
Epoch: 12/15 Loss: 3.3187372236251833
Epoch: 12/15 Loss: 3.331912058353424
Epoch: 13/15 Loss: 3.244559086513224
Epoch: 13/15 Loss: 3.18434060382843
Epoch: 13/15 Loss: 3.1857465381622316
Epoch: 13/15 Loss: 3.202395000934601
Epoch: 13/15 Loss: 3.2047086901664734
Epoch: 13/15 Loss: 3.2394026832580565
Epoch: 13/15 Loss: 3.228401366233826
Epoch: 13/15 Loss: 3.2566629428863525
Epoch: 13/15 Loss: 3.257145313739777
Epoch: 13/15 Loss: 3.2559934039115905
Epoch: 13/15 Loss: 3.297042961597443
Epoch: 13/15 Loss: 3.2976802291870118
Epoch: 13/15 Loss: 3.2991899905204773
Epoch: 14/15 Loss: 3.2246991827761056
Epoch: 14/15 Loss: 3.1642752180099487
Epoch: 14/15 Loss: 3.177460174560547
Epoch: 14/15 Loss: 3.1856022624969484
Epoch: 14/15 Loss: 3.1972383599281313
Epoch: 14/15 Loss: 3.207955976009369
Epoch: 14/15 Loss: 3.2303984208106993
Epoch: 14/15 Loss: 3.214177114486694
Epoch: 14/15 Loss: 3.2492737979888915
Epoch: 14/15 Loss: 3.250202163219452
Epoch: 14/15 Loss: 3.2708270101547243
Epoch: 14/15 Loss: 3.266427955150604
Epoch: 14/15 Loss: 3.2689873123168947
Epoch: 15/15 Loss: 3.2057445130481073
Epoch: 15/15 Loss: 3.1529199509620667
Epoch: 15/15 Loss: 3.1448832812309266
Epoch: 15/15 Loss: 3.1661043372154234
Epoch: 15/15 Loss: 3.183661512851715
Epoch: 15/15 Loss: 3.1891458773612977
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I trained the network with the following hyperparameters. - sequence_length = 10: since the average number of words in each line is 5, a sequence length of 10 guarantees that on average the network sees about 2 different lines, which helps it extract the context- num_epochs = 15: since it is a moderately big network, I estimated that 15 epochs could be run in a few hours; by then analyzing the trend in the loss I could decide whether to train longer or whether the results were already sufficient- learning_rate = 0.001: in all the networks trained during the course I always obtained sufficient performance with this learning rate, so I used it here as well- embedding_dim = 256: in the Sentiment Analysis network I used an embedding size of 400. An embedding dimension of 400 over 46k words amounts to about 18 million parameters, so to reduce the parameter count I cut the embedding dimension to 256. - hidden_dim = 256: here I reused the value from the Sentiment Analysis RNN- n_layers = 2: the suggested number of recurrent layers is usually between 2 and 3; again, to keep the size of the network limited I decided to use 2 layers (note: this was also the value used in the Sentiment Analysis RNN)With these parameters I obtained a training loss of 3.26, which is better than the required 3.5, so I decided to stick with these hyperparameters for this project. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
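###Markdown
Before generating a full script, the toy fragment below (completely standalone, not tied to the trained model) illustrates the top-k sampling step used inside `generate`: the k highest word scores are kept, turned into probabilities, and one of the corresponding word ids is drawn at random in proportion to its probability, which is what keeps the generated text from repeating the single most likely word over and over.
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up scores for a 10-word toy vocabulary (one "batch" of size 1)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 3.0, 0.1, 0.4, 1.0]])
p = F.softmax(scores, dim=1).data

top_p, top_i = p.topk(5)          # keep the 5 most likely word ids
top_p = top_p.numpy().squeeze()
top_i = top_i.numpy().squeeze()

# sample one id, weighted by the renormalized top-5 probabilities
word_i = np.random.choice(top_i, p=top_p / top_p.sum())
print(top_i, '->', word_i)
```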
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
import numpy as np
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:45: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {}
int_to_vocab = {}
for i, word in enumerate(vocab):
vocab_to_int[word] = i
int_to_vocab[i] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
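###Markdown
A quick illustration (not required by the project) of what the two dictionaries give you: every distinct word gets a unique id, and encoding followed by decoding recovers the original text. The exact id assigned to each word will vary from run to run because `set` ordering is not deterministic, but the round trip always works.
```
sample_words = ['jerry', 'says', 'hello', 'to', 'george', 'says', 'jerry']
v2i, i2v = create_lookup_tables(sample_words)

encoded = [v2i[word] for word in sample_words]
decoded = [i2v[idx] for idx in encoded]

print(encoded)                  # e.g. [3, 0, 2, 4, 1, 0, 3] - ids depend on set ordering
print(decoded == sample_words)  # True
```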
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_parentheses||',
')':'||right_parentheses||',
'-':'||dash||',
'\n':'||return||',
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
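###Markdown
To see what this dictionary achieves, the snippet below sketches the idea behind the pre-processing step (the project's `helper.preprocess_and_save_data` does the real work, so this is illustrative only): each symbol is replaced by its token surrounded by spaces, so that after splitting on whitespace the punctuation ends up as separate "words".
```
sample = 'are you through?\njerry: you do of course try on, when you buy?'

tokenized = sample
for symbol, token in token_lookup().items():
    tokenized = tokenized.replace(symbol, ' {} '.format(token))

print(tokenized.split()[:12])
# ['are', 'you', 'through', '||question_mark||', '||return||', 'jerry:',
#  'you', 'do', 'of', 'course', 'try', 'on']
```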
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
print(n_batches)
# only full batches
words = words[:n_batches*batch_size]
x, y = [], []
for i in range(0, len(words)-sequence_length):
x_batch = words[i:i+sequence_length]
y_batch = words[i+sequence_length]
x.append(x_batch)
y.append(y_batch)
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.array(y)))
dataloader = DataLoader(data, shuffle=True, batch_size = batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
5
torch.Size([10, 5])
tensor([[ 12, 13, 14, 15, 16],
[ 27, 28, 29, 30, 31],
[ 40, 41, 42, 43, 44],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 1, 2, 3, 4, 5],
[ 35, 36, 37, 38, 39],
[ 2, 3, 4, 5, 6],
[ 33, 34, 35, 36, 37],
[ 38, 39, 40, 41, 42]])
torch.Size([10])
tensor([ 17, 32, 45, 28, 11, 6, 40, 7, 38, 43])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout layer
#self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
#self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
output = self.fc(lstm_out)
# reshape to be batch_size first
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1] # get the last batch of word scores
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip=5 # gradient clipping
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
#print(output.shape)
#print(hidden)
#print(output)
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
#print(loss.item())
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = .001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 5000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.506498638153076
Epoch: 1/10 Loss: 4.877447241783142
Epoch: 1/10 Loss: 4.644863019943237
Epoch: 1/10 Loss: 4.550259655952454
Epoch: 1/10 Loss: 4.485655527114869
Epoch: 1/10 Loss: 4.377468444824219
Epoch: 1/10 Loss: 4.3330537147521975
Epoch: 1/10 Loss: 4.297451208114624
Epoch: 1/10 Loss: 4.272087756633758
Epoch: 1/10 Loss: 4.234725869178772
Epoch: 1/10 Loss: 4.201399923324585
Epoch: 1/10 Loss: 4.192778712272644
Epoch: 1/10 Loss: 4.18784255695343
Epoch: 2/10 Loss: 4.060189198117611
Epoch: 2/10 Loss: 3.9615867052078246
Epoch: 2/10 Loss: 3.9691027655601503
Epoch: 2/10 Loss: 3.9519200410842896
Epoch: 2/10 Loss: 3.9647532691955565
Epoch: 2/10 Loss: 3.934483114242554
Epoch: 2/10 Loss: 3.9386162099838256
Epoch: 2/10 Loss: 3.9188381514549255
Epoch: 2/10 Loss: 3.9008119230270384
Epoch: 2/10 Loss: 3.9270327596664427
Epoch: 2/10 Loss: 3.9407107014656066
Epoch: 2/10 Loss: 3.9228728451728823
Epoch: 2/10 Loss: 3.934419768333435
Epoch: 3/10 Loss: 3.8343563892624597
Epoch: 3/10 Loss: 3.7426507859230043
Epoch: 3/10 Loss: 3.7557671813964846
Epoch: 3/10 Loss: 3.7736143598556517
Epoch: 3/10 Loss: 3.7458421902656553
Epoch: 3/10 Loss: 3.7396921286582945
Epoch: 3/10 Loss: 3.781070989608765
Epoch: 3/10 Loss: 3.7556300292015075
Epoch: 3/10 Loss: 3.7549902181625368
Epoch: 3/10 Loss: 3.773991184234619
Epoch: 3/10 Loss: 3.7551934151649475
Epoch: 3/10 Loss: 3.7813932132720947
Epoch: 3/10 Loss: 3.7605673551559446
Epoch: 4/10 Loss: 3.687348848039454
Epoch: 4/10 Loss: 3.6174276361465454
Epoch: 4/10 Loss: 3.632415452003479
Epoch: 4/10 Loss: 3.60900665807724
Epoch: 4/10 Loss: 3.6264643836021424
Epoch: 4/10 Loss: 3.652281243801117
Epoch: 4/10 Loss: 3.6259088106155395
Epoch: 4/10 Loss: 3.641796570777893
Epoch: 4/10 Loss: 3.608736423969269
Epoch: 4/10 Loss: 3.6658333034515382
Epoch: 4/10 Loss: 3.64899453496933
Epoch: 4/10 Loss: 3.6710940074920653
Epoch: 4/10 Loss: 3.671676317214966
Epoch: 5/10 Loss: 3.593381263746703
Epoch: 5/10 Loss: 3.5083509378433226
Epoch: 5/10 Loss: 3.5186539788246156
Epoch: 5/10 Loss: 3.526905979633331
Epoch: 5/10 Loss: 3.526041862487793
Epoch: 5/10 Loss: 3.540880611896515
Epoch: 5/10 Loss: 3.5590038523674012
Epoch: 5/10 Loss: 3.555765299320221
Epoch: 5/10 Loss: 3.5798491163253785
Epoch: 5/10 Loss: 3.5680856795310976
Epoch: 5/10 Loss: 3.5748853750228884
Epoch: 5/10 Loss: 3.5902964310646057
Epoch: 5/10 Loss: 3.60514697933197
Epoch: 6/10 Loss: 3.52073567268277
Epoch: 6/10 Loss: 3.434302396297455
Epoch: 6/10 Loss: 3.429089115142822
Epoch: 6/10 Loss: 3.467383470535278
Epoch: 6/10 Loss: 3.461300371170044
Epoch: 6/10 Loss: 3.477927261829376
Epoch: 6/10 Loss: 3.488366159915924
Epoch: 6/10 Loss: 3.5074389123916627
Epoch: 6/10 Loss: 3.479556882381439
Epoch: 6/10 Loss: 3.5131272134780884
Epoch: 6/10 Loss: 3.5079288334846495
Epoch: 6/10 Loss: 3.5293037824630735
Epoch: 6/10 Loss: 3.550463225841522
Epoch: 7/10 Loss: 3.45417728034918
Epoch: 7/10 Loss: 3.394913876056671
Epoch: 7/10 Loss: 3.393556586742401
Epoch: 7/10 Loss: 3.4119445657730103
Epoch: 7/10 Loss: 3.420303556442261
Epoch: 7/10 Loss: 3.417772924423218
Epoch: 7/10 Loss: 3.451010533809662
Epoch: 7/10 Loss: 3.4430946741104127
Epoch: 7/10 Loss: 3.4528610763549805
Epoch: 7/10 Loss: 3.4568219618797302
Epoch: 7/10 Loss: 3.452462176799774
Epoch: 7/10 Loss: 3.484591769218445
Epoch: 7/10 Loss: 3.4895895872116087
Epoch: 8/10 Loss: 3.409822672359214
Epoch: 8/10 Loss: 3.341217004299164
Epoch: 8/10 Loss: 3.3462885613441467
Epoch: 8/10 Loss: 3.3690835256576537
Epoch: 8/10 Loss: 3.3675912661552427
Epoch: 8/10 Loss: 3.374692803859711
Epoch: 8/10 Loss: 3.391706500530243
Epoch: 8/10 Loss: 3.396349504947662
Epoch: 8/10 Loss: 3.4337016572952272
Epoch: 8/10 Loss: 3.4108635573387147
Epoch: 8/10 Loss: 3.427956174850464
Epoch: 8/10 Loss: 3.4229448461532592
Epoch: 8/10 Loss: 3.441431237220764
Epoch: 9/10 Loss: 3.366252738335901
Epoch: 9/10 Loss: 3.300349271774292
Epoch: 9/10 Loss: 3.31926242685318
Epoch: 9/10 Loss: 3.2994600176811217
Epoch: 9/10 Loss: 3.3354625854492186
Epoch: 9/10 Loss: 3.3441440467834473
Epoch: 9/10 Loss: 3.3583695521354677
Epoch: 9/10 Loss: 3.3639197597503663
Epoch: 9/10 Loss: 3.3809996342658994
Epoch: 9/10 Loss: 3.371726893424988
Epoch: 9/10 Loss: 3.3805938386917114
Epoch: 9/10 Loss: 3.423382860183716
Epoch: 9/10 Loss: 3.4237439546585082
Epoch: 10/10 Loss: 3.3301968308519725
Epoch: 10/10 Loss: 3.2772505798339844
Epoch: 10/10 Loss: 3.28263094997406
Epoch: 10/10 Loss: 3.2998509521484376
Epoch: 10/10 Loss: 3.299762092113495
Epoch: 10/10 Loss: 3.299778486251831
Epoch: 10/10 Loss: 3.319560504436493
Epoch: 10/10 Loss: 3.3200579319000245
Epoch: 10/10 Loss: 3.333797775268555
Epoch: 10/10 Loss: 3.3373168268203734
Epoch: 10/10 Loss: 3.3615681343078614
Epoch: 10/10 Loss: 3.380522684574127
Epoch: 10/10 Loss: 3.396915725708008
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried many combinations of hyperparameters and dropout. 1. **Basic params to test model functionality -** * sequence_length = 10 * batch_size=50 * num_epochs=1 * learning_rate = 0.001 * vocab_size=len(vocab_to_int) +1 * output_size=vocab_size * embedding_dim=50 * hidden_dim=10 * n_layers=2 **Result** - Training was happening 2. **Set 1** - Increased hidden dimension only * sequence_length = 10 * batch_size=50 * num_epochs=1 * learning_rate = 0.001 * vocab_size=len(vocab_to_int) +1 * output_size=vocab_size * embedding_dim=50 * hidden_dim=128 * n_layers=2 **Result** - the loss was stuck at ~5 and not decreasing any further 3. **Set 2** -- Increased hidden and embedding dimension * sequence_length = 20 * batch_size=50 * num_epochs=1 * learning_rate = 0.001 * vocab_size=len(vocab_to_int) +1 * output_size=vocab_size * embedding_dim=100 * hidden_dim=128 * n_layers=2 **Result** - again the loss was stuck at ~5 and not decreasing any further 4. **Set 3** -- Increased hidden dimension further and increased batch size * sequence_length = 10 * batch_size=128 * num_epochs=2 * learning_rate = 0.01 * vocab_size=len(vocab_to_int) +1 * output_size=vocab_size * embedding_dim=100 * hidden_dim=256 * n_layers=2 **Result** - the loss kept increasing and decreasing 5. **Set 4** -- Increased epochs and embedding dimension and removed +1 from vocab size as there is no padding here * sequence_length = 10 * batch_size=128 * num_epochs=20 * learning_rate = 0.01 * vocab_size=len(vocab_to_int) no padding * output_size=vocab_size * embedding_dim=300 * hidden_dim=256 * n_layers=2 **Result** - the loss was decreasing at the start but plateaued after 10 epochs 6. **Set 5** -- Decreased embedding dimension to 200 and removed the dropout layer * sequence_length = 10 * batch_size=128 * num_epochs=10 * learning_rate = 0.001 * vocab_size=len(vocab_to_int) no padding * output_size=vocab_size * embedding_dim=200 * hidden_dim=256 * n_layers=2 **Result** - Final loss: 3.396915725708008, and the generated script also seems okay. 7. **Set 6** -- Increased embedding dimension to 300 and used 3 LSTM layers * sequence_length = 10 * batch_size=128 * num_epochs=10 * learning_rate = 0.001 * vocab_size=len(vocab_to_int) no padding * output_size=vocab_size * embedding_dim=300 * hidden_dim=256 * n_layers=3 **Result** - Final loss is 3.476551958018685 and I was not happy with the script, so the final submission uses Set 5 * Kept sequence_length = 10, since sentences in the TV scripts in the dataset are of similar length * embedding_dim=300 is standard (the final submission with Set 5 uses 200) * hidden_dim=256, taken from the sentiment mini project * n_layers=2, taken from the sentiment mini project; I tried 3 as well but didn't get better results. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:48: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(texts):
    """
    Create lookup tables for vocabulary
    :param texts: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    vocab_to_int = {}
    int_to_vocab = {}
    # count the words, then assign each unique word an integer id
    words = Counter(texts)
    for i, token in enumerate(words):
        vocab_to_int[token] = i
        int_to_vocab[i] = token
    # return the two dictionaries as a tuple
    return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
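As a quick, hedged illustration of why this helps (the token strings below are placeholders, not the exact values any test requires), replacing punctuation before splitting keeps "bye" and "bye!" mapped to the same word id:
```
toy_token_map = {'!': '||Exclamation_Mark||', '.': '||Period||'}

sample = 'bye! see you later.'
for symbol, token in toy_token_map.items():
    sample = sample.replace(symbol, ' {} '.format(token))

print(sample.split())
# ['bye', '||Exclamation_Mark||', 'see', 'you', 'later', '||Period||']
```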
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return { '.':'|dot|',
',':'|comma|',
'"':'|quote|',
';':'|semicolon|',
'!':'|exclamation|',
'?':'|question|',
'(':'|open_paren|',
')':'|close_parn|',
'-':'|dash|',
'\n':'|newline|' }
print(token_lookup())
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
{'.': '|dot|', ',': '|comma|', '"': '|quote|', ';': '|semicolon|', '!': '|exclamation|', '?': '|question|', '(': '|open_paren|', ')': '|close_parn|', '-': '|dash|', '\n': '|newline|'}
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
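Before implementing `batch_data`, it can help to see the windowing idea in isolation. The following is only a minimal sketch of the example above (names like `toy_words` are illustrative), not the graded function:
```
import torch
from torch.utils.data import TensorDataset, DataLoader

toy_words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [toy_words[i:i + sequence_length] for i in range(len(toy_words) - sequence_length)]
targets = [toy_words[i + sequence_length] for i in range(len(toy_words) - sequence_length)]
# features -> [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]], targets -> [5, 6, 7]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
data_loader = DataLoader(data, batch_size=2)
for x, y in data_loader:
    print(x, y)
```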
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size, verbose=1):
    """
    Batch the neural network data using DataLoader
    :param words: The word ids of the TV scripts
    :param sequence_length: The sequence length of each batch
    :param batch_size: The size of each batch; the number of sequences in a batch
    :param verbose: when truthy, print occasional progress while building the sequences
    :return: DataLoader with batched data
    """
    words = list(words)
    if len(words) <= sequence_length:
        raise ValueError('words must be longer than sequence_length')
    my_x = []
    my_y = []
    # slide a window of sequence_length words over the text; the word that
    # follows each window is its target
    for _ in range(len(words) - sequence_length):
        if verbose and len(words) % 10000 == 0:
            print('word length is {}'.format(len(words)))
        my_x.append([words[i] for i in range(sequence_length)])
        my_y.append(words[sequence_length])
        words.pop(0)
    # build feature and target tensors and wrap them in a DataLoader
    tensor_x = torch.stack([torch.Tensor(i) for i in my_x]).type(torch.LongTensor)
    tensor_y = torch.Tensor(my_y).type(torch.LongTensor)
    dataset = TensorDataset(tensor_x, tensor_y)
    dataloader = DataLoader(dataset, batch_size=batch_size)
    # return a dataloader
    return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
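The reshaping in hint 2 is easy to check on dummy tensors. This is just an illustrative sketch with made-up sizes, separate from the `RNN` class below:
```
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, output_size = 2, 3, 4, 5
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)   # batch_first=True output

flat = lstm_output.contiguous().view(-1, hidden_dim)            # (batch*seq, hidden_dim)
fc = nn.Linear(hidden_dim, output_size)
scores = fc(flat)                                               # (batch*seq, output_size)

scores = scores.view(batch_size, -1, output_size)               # (batch, seq, output_size)
last_word_scores = scores[:, -1]                                # (batch, output_size)
print(last_word_scores.shape)                                   # torch.Size([2, 5])
```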
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# store all the variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
        # set class variables and define the model layers
        print('embedding is accepting {} and {}'.format(vocab_size, embedding_dim))
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        # note: gru1 and dropout are defined here but are not used in forward();
        # the active path is embedding -> lstm -> fc1
        self.gru1 = nn.GRU(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=n_layers, dropout=dropout, batch_first=True)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc1 = nn.Linear(hidden_dim, output_size)
        self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
        batch_size = nn_input.size(0)
        # embedding lookup followed by the stacked LSTM layers
        output = self.embedding(nn_input)
        output, hidden = self.lstm(output, hidden)
        # stack the LSTM outputs and pass them through the fully-connected layer
        output = output.contiguous().view(-1, self.hidden_dim)
        output = self.fc1(output)
        # reshape into (batch_size, seq_length, output_size) and keep only the last word scores
        output = output.view(batch_size, -1, self.output_size)
        output = output[:, -1]
        # return one batch of output word scores and the hidden state
        return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
        # initialize the hidden state and cell state with zeros, matching the
        # dtype of the model parameters, and move them to the GPU if available
        weight = next(self.parameters()).data
        if torch.cuda.is_available():
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
        return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
embedding is accepting 20 and 15
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
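The train-step pattern itself is not RNN-specific. The sketch below shows it on a toy linear model (purely illustrative, with made-up shapes), including how `loss.item()` yields the scalar loss to return:
```
import torch
import torch.nn as nn

toy_model = nn.Linear(4, 3)
toy_optimizer = torch.optim.Adam(toy_model.parameters(), lr=0.01)
toy_criterion = nn.CrossEntropyLoss()

inp = torch.randn(8, 4)             # a batch of inputs
target = torch.randint(0, 3, (8,))  # class targets

toy_model.zero_grad()
output = toy_model(inp)
loss = toy_criterion(output, target)
loss.backward()
toy_optimizer.step()

print(loss.item())                  # average loss over the batch, as a Python float
```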
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
    # TODO: Implement Function
    # detach the hidden state from its history so we only backprop through this batch
    h = tuple(hidden_item.data for hidden_item in hidden)
    # move the model, data and hidden state to the GPU, if one is available
    if torch.cuda.is_available():
        rnn.cuda()
        inp = inp.cuda()
        target = target.cuda()
        h = tuple(hidden_item.cuda() for hidden_item in h)
    output, h = rnn(inp, h)
#set zero grads
rnn.zero_grad()
optimizer.zero_grad()
loss = criterion(output, target)
# perform backpropagation and optimization
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
# print('h is {}, hidden is {}'.format(h,hidden))
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
embedding is accepting 20 and 15
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, train_loader = None):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# testing different sequence length
sequence_lengths_ = range(8,22) # of words in a sequence
# Batch Size
batch_size_ = 128
# data loader - do not change
# Training parameters
# Number of Epochs
num_epochs_ = 3
# Learning Rate
learning_rate_ = 0.001
# Model parameters
# Vocab size
vocab_dict = create_lookup_tables(int_text)
vocab_size = len(vocab_dict[0])
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 200
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches_ = 3000
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5b0dea96_workspace-utils/workspace-utils.py
!mv workspace-utils.py workspace_utils.py
from workspace_utils import active_session
# create model and move to gpu if available
rnn_ = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
print('training with cuda')
rnn_.cuda()
# # defining loss and optimization functions for training
# optimizer_ = torch.optim.Adam(rnn_.parameters(), lr=learning_rate_)
# criterion_ = nn.CrossEntropyLoss()
# training the model
with active_session():
for sequence_length_ in sequence_lengths_:
print('sequence_length of {}'.format(sequence_length_))
# defining loss and optimization functions for training
rnn_ = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
optimizer_ = torch.optim.Adam(rnn_.parameters(), lr=learning_rate_)
criterion_ = nn.CrossEntropyLoss()
train_loader_ = batch_data(int_text, sequence_length_, batch_size_, verbose = 0)
trained_rnn = train_rnn(rnn_, batch_size_, optimizer_, criterion_, num_epochs_, show_every_n_batches_, train_loader = train_loader_)
## clean up test model
rnn_ = None
optimizer_ = None
train_loader_ = None
trained_rnn = None
# Data params
# Sequence Length
sequence_length = 21 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# vocab_dict = create_lookup_tables(int_text)
# len(vocab_dict[0])
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_dict = create_lookup_tables(int_text)
vocab_size = len(vocab_dict[0])+1
print(type(vocab_size))
print(vocab_size)
# vocab_size = 1000
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 200
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 3000
###Output
<class 'int'>
21388
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5b0dea96_workspace-utils/workspace-utils.py
!mv workspace-utils.py workspace_utils.py
from workspace_utils import active_session
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
print('training with cuda')
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, train_loader = train_loader)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
--2020-01-05 16:32:21-- https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5b0dea96_workspace-utils/workspace-utils.py
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.170.197
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.170.197|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1540 (1.5K) []
Saving to: ‘workspace-utils.py’
workspace-utils.py 100%[===================>] 1.50K --.-KB/s in 0s
2020-01-05 16:32:21 (47.0 MB/s) - ‘workspace-utils.py’ saved [1540/1540]
embedding is accepting 21388 and 200
training with cuda
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.091635976314545
Epoch: 1/20 Loss: 4.522566963275274
Epoch: 2/20 Loss: 4.231029295146421
Epoch: 2/20 Loss: 4.051306582609812
Epoch: 3/20 Loss: 3.94787837312275
Epoch: 3/20 Loss: 3.851106468995412
Epoch: 4/20 Loss: 3.7886007534944013
Epoch: 4/20 Loss: 3.7275266621112824
Epoch: 5/20 Loss: 3.6795879172029347
Epoch: 5/20 Loss: 3.633143556038539
Epoch: 6/20 Loss: 3.5952494360692375
Epoch: 6/20 Loss: 3.556723170042038
Epoch: 7/20 Loss: 3.5290843500962454
Epoch: 7/20 Loss: 3.4936336399714154
Epoch: 8/20 Loss: 3.4777449038744153
Epoch: 8/20 Loss: 3.449163283665975
Epoch: 9/20 Loss: 3.439702339813913
Epoch: 9/20 Loss: 3.4093887383937838
Epoch: 10/20 Loss: 3.4012197300239846
Epoch: 10/20 Loss: 3.3731024016539255
Epoch: 11/20 Loss: 3.3698691648390526
Epoch: 11/20 Loss: 3.3411524329980216
Epoch: 12/20 Loss: 3.3388918782097474
Epoch: 12/20 Loss: 3.313706538279851
Epoch: 13/20 Loss: 3.314520687563169
Epoch: 13/20 Loss: 3.2904601511160534
Epoch: 14/20 Loss: 3.2924320061641126
Epoch: 14/20 Loss: 3.268146762688955
Epoch: 15/20 Loss: 3.2754495107247568
Epoch: 15/20 Loss: 3.254682934522629
Epoch: 16/20 Loss: 3.258460097001764
Epoch: 16/20 Loss: 3.233943171262741
Epoch: 17/20 Loss: 3.242279090218173
Epoch: 17/20 Loss: 3.2142629300753276
Epoch: 18/20 Loss: 3.2286074717178788
Epoch: 18/20 Loss: 3.202573269287745
Epoch: 19/20 Loss: 3.2156394934227737
Epoch: 19/20 Loss: 3.18821901456515
Epoch: 20/20 Loss: 3.1974074770864234
Epoch: 20/20 Loss: 3.1709482096036274
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tested many different configurations, starting with simpler models because of my limited computing resources, to find the best convergence. I tried sequence lengths from 8 to 21 and found that 21 converged fastest. However, because each model takes a very long time to train, more experiments would be required to find an optimal sequence length and the other hyperparameters. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
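The top-k step can be demonstrated on a single row of made-up word scores (a hedged sketch only; the real `generate` function below works on the trained model's output):
```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[1.0, 3.0, 0.5, 2.0, 0.1]])   # (batch=1, vocab=5), toy values
p = F.softmax(scores, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

# renormalise over the k candidates and pick one with some randomness
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```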
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:50: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
import inspect
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
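A small round-trip check (with toy dictionaries, purely illustrative) shows how the two lookup tables are meant to work together:
```
toy_vocab_to_int = {'jerry': 0, 'hello': 1, 'newman': 2}
toy_int_to_vocab = {i: w for w, i in toy_vocab_to_int.items()}

encoded = [toy_vocab_to_int[w] for w in 'hello newman'.split()]
decoded = ' '.join(toy_int_to_vocab[i] for i in encoded)
print(encoded, decoded)   # [1, 2] hello newman
```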
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
symbolDict = {
'.' : '<PERIOD>',
',' : '<COMMA>',
'"' : '<QUOTATION_MARK>',
';' : '<SEMICOLON>',
'!' : '<EXCLAMATION_MARK>',
'?' : '<QUESTION_MARK>',
'(' : '<LEFT_PAREN>',
')' : '<RIGHT_PAREN>',
'-' : '<DASH>',
'\n' : '<NEW_LINE>'}
return symbolDict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
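One detail worth noting: if the DataLoader shuffles, it reorders whole (feature, target) rows together, so every target still belongs to its own sequence. A minimal sketch (toy tensors, illustrative only):
```
import torch
from torch.utils.data import TensorDataset, DataLoader

feats = torch.arange(12).view(6, 2)   # six toy 2-word "sequences"
targs = torch.arange(6)               # one target per sequence
loader = DataLoader(TensorDataset(feats, targs), batch_size=3, shuffle=True)
for x, y in loader:
    print(x.tolist(), y.tolist())
```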
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
batch_y = words[idx_end]
y.append(batch_y)
feature_tensor = torch.from_numpy(np.asarray(x))
target_tensor = torch.from_numpy(np.asarray(y))
data = TensorDataset(feature_tensor, target_tensor)
data_loader = DataLoader(data, shuffle=True, batch_size=batch_size)
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[42, 43, 44, 45, 46],
[15, 16, 17, 18, 19],
[31, 32, 33, 34, 35],
[30, 31, 32, 33, 34],
[ 5, 6, 7, 8, 9],
[19, 20, 21, 22, 23],
[ 9, 10, 11, 12, 13],
[23, 24, 25, 26, 27],
[37, 38, 39, 40, 41],
[44, 45, 46, 47, 48]])
torch.Size([10])
tensor([47, 20, 36, 35, 10, 24, 14, 28, 42, 49])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
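For `init_hidden`, it helps to remember that with `batch_first=True` the inputs are (batch, seq, features) but the hidden and cell states are still (n_layers, batch, hidden_dim). A quick, illustrative check with made-up sizes:
```
import torch
import torch.nn as nn

n_layers, batch_size, hidden_dim, embedding_dim = 2, 4, 8, 6
lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True)

h0 = torch.zeros(n_layers, batch_size, hidden_dim)
c0 = torch.zeros(n_layers, batch_size, hidden_dim)
x = torch.randn(batch_size, 5, embedding_dim)   # (batch, seq_len, embedding)

out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape, hn.shape)   # torch.Size([4, 5, 8]) torch.Size([2, 4, 8])
```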
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
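The implementation below also clips gradients to a maximum norm of 5, which helps keep RNN training stable. A stand-alone sketch of what `clip_grad_norm_` does (toy model, illustrative only):
```
import torch
import torch.nn as nn

toy_model = nn.Linear(10, 10)
out = toy_model(torch.randn(4, 10)).sum()
out.backward()

total_norm = nn.utils.clip_grad_norm_(toy_model.parameters(), max_norm=5)
print(total_norm)   # the overall gradient norm before clipping
```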
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
if(train_on_gpu):
rnn.cuda()
h = tuple([each.data for each in hidden])
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
output, h = rnn(inputs, h)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
# return the loss over a batch and the hidden state
optimizer.step()
return loss.item(), h
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 512
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 40
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 128
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 100
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 40 epoch(s)...
Epoch: 1/40 Loss: 6.295405616760254
Epoch: 1/40 Loss: 5.80665018081665
Epoch: 1/40 Loss: 5.797196516990661
Epoch: 1/40 Loss: 5.7697900390625
Epoch: 1/40 Loss: 5.751341071128845
Epoch: 1/40 Loss: 5.7510121726989745
Epoch: 1/40 Loss: 5.707662725448609
Epoch: 1/40 Loss: 5.4311303663253785
Epoch: 1/40 Loss: 5.130197420120239
Epoch: 1/40 Loss: 4.972960519790649
Epoch: 1/40 Loss: 4.8824619817733765
Epoch: 1/40 Loss: 4.801440367698669
Epoch: 1/40 Loss: 4.735593161582947
Epoch: 1/40 Loss: 4.675324683189392
Epoch: 1/40 Loss: 4.631986718177796
Epoch: 1/40 Loss: 4.59436755657196
Epoch: 1/40 Loss: 4.550986733436584
Epoch: 2/40 Loss: 4.482700009718009
Epoch: 2/40 Loss: 4.443418326377869
Epoch: 2/40 Loss: 4.400814819335937
Epoch: 2/40 Loss: 4.424341266155243
Epoch: 2/40 Loss: 4.376003692150116
Epoch: 2/40 Loss: 4.369593114852905
Epoch: 2/40 Loss: 4.365958843231201
Epoch: 2/40 Loss: 4.3401699638366695
Epoch: 2/40 Loss: 4.336191778182983
Epoch: 2/40 Loss: 4.296621744632721
Epoch: 2/40 Loss: 4.2957856607437135
Epoch: 2/40 Loss: 4.276228833198547
Epoch: 2/40 Loss: 4.287097451686859
Epoch: 2/40 Loss: 4.243658084869384
Epoch: 2/40 Loss: 4.248096067905426
Epoch: 2/40 Loss: 4.238345925807953
Epoch: 2/40 Loss: 4.2611296534538265
Epoch: 3/40 Loss: 4.182126696228136
Epoch: 3/40 Loss: 4.1546294665336605
Epoch: 3/40 Loss: 4.12519321680069
Epoch: 3/40 Loss: 4.109879775047302
Epoch: 3/40 Loss: 4.137878057956695
Epoch: 3/40 Loss: 4.109914824962616
Epoch: 3/40 Loss: 4.130165166854859
Epoch: 3/40 Loss: 4.082552621364593
Epoch: 3/40 Loss: 4.107505435943604
Epoch: 3/40 Loss: 4.107112300395966
Epoch: 3/40 Loss: 4.1013430094718935
Epoch: 3/40 Loss: 4.074317052364349
Epoch: 3/40 Loss: 4.0794916081428525
Epoch: 3/40 Loss: 4.065399646759033
Epoch: 3/40 Loss: 4.053793625831604
Epoch: 3/40 Loss: 4.064567303657531
Epoch: 3/40 Loss: 4.083773424625397
Epoch: 4/40 Loss: 3.9909840019036693
Epoch: 4/40 Loss: 4.006628975868225
Epoch: 4/40 Loss: 3.963618371486664
Epoch: 4/40 Loss: 3.9872169971466063
Epoch: 4/40 Loss: 3.964105315208435
Epoch: 4/40 Loss: 3.935432105064392
Epoch: 4/40 Loss: 3.959433579444885
Epoch: 4/40 Loss: 3.9713794231414794
Epoch: 4/40 Loss: 3.9690206408500672
Epoch: 4/40 Loss: 3.9682229924201966
Epoch: 4/40 Loss: 3.9540217995643614
Epoch: 4/40 Loss: 3.971798229217529
Epoch: 4/40 Loss: 3.9671814918518065
Epoch: 4/40 Loss: 3.950990641117096
Epoch: 4/40 Loss: 3.9649457335472107
Epoch: 4/40 Loss: 3.9183714103698732
Epoch: 4/40 Loss: 3.932960946559906
Epoch: 5/40 Loss: 3.886957454343214
Epoch: 5/40 Loss: 3.8706419801712038
Epoch: 5/40 Loss: 3.871155545711517
Epoch: 5/40 Loss: 3.8602646589279175
Epoch: 5/40 Loss: 3.8603144598007204
Epoch: 5/40 Loss: 3.852443425655365
Epoch: 5/40 Loss: 3.8728827905654906
Epoch: 5/40 Loss: 3.884243245124817
Epoch: 5/40 Loss: 3.876310706138611
Epoch: 5/40 Loss: 3.840295882225037
Epoch: 5/40 Loss: 3.833985369205475
Epoch: 5/40 Loss: 3.868768584728241
Epoch: 5/40 Loss: 3.856507878303528
Epoch: 5/40 Loss: 3.870845365524292
Epoch: 5/40 Loss: 3.85084201335907
Epoch: 5/40 Loss: 3.880948007106781
Epoch: 5/40 Loss: 3.839629361629486
Epoch: 6/40 Loss: 3.7952801454151777
Epoch: 6/40 Loss: 3.791668162345886
Epoch: 6/40 Loss: 3.7634881639480593
Epoch: 6/40 Loss: 3.7595294761657714
Epoch: 6/40 Loss: 3.7662392234802247
Epoch: 6/40 Loss: 3.77467378616333
Epoch: 6/40 Loss: 3.775815877914429
Epoch: 6/40 Loss: 3.78679922580719
Epoch: 6/40 Loss: 3.7752767276763914
Epoch: 6/40 Loss: 3.777498424053192
Epoch: 6/40 Loss: 3.802544696331024
Epoch: 6/40 Loss: 3.7612729811668397
Epoch: 6/40 Loss: 3.7783097934722902
Epoch: 6/40 Loss: 3.8000766587257386
Epoch: 6/40 Loss: 3.7831038355827333
Epoch: 6/40 Loss: 3.757937984466553
Epoch: 6/40 Loss: 3.7868752789497377
Epoch: 7/40 Loss: 3.743608427385912
Epoch: 7/40 Loss: 3.705676968097687
Epoch: 7/40 Loss: 3.6995870661735535
Epoch: 7/40 Loss: 3.7004544353485107
Epoch: 7/40 Loss: 3.7161243748664856
Epoch: 7/40 Loss: 3.711156041622162
Epoch: 7/40 Loss: 3.721444709300995
Epoch: 7/40 Loss: 3.7119625425338745
Epoch: 7/40 Loss: 3.7012072134017946
Epoch: 7/40 Loss: 3.699277074337006
Epoch: 7/40 Loss: 3.701347050666809
Epoch: 7/40 Loss: 3.7014129090309145
Epoch: 7/40 Loss: 3.6882052850723266
Epoch: 7/40 Loss: 3.7217113471031187
Epoch: 7/40 Loss: 3.7116312742233277
Epoch: 7/40 Loss: 3.7142012405395506
Epoch: 7/40 Loss: 3.7124273347854615
Epoch: 8/40 Loss: 3.64532511454102
Epoch: 8/40 Loss: 3.6349867463111876
Epoch: 8/40 Loss: 3.628470141887665
Epoch: 8/40 Loss: 3.635446991920471
Epoch: 8/40 Loss: 3.616869878768921
Epoch: 8/40 Loss: 3.6518490934371948
Epoch: 8/40 Loss: 3.6229884028434753
Epoch: 8/40 Loss: 3.6644326877593993
Epoch: 8/40 Loss: 3.643743782043457
Epoch: 8/40 Loss: 3.632582881450653
Epoch: 8/40 Loss: 3.6628377771377565
Epoch: 8/40 Loss: 3.6410494661331176
Epoch: 8/40 Loss: 3.6723699021339415
Epoch: 8/40 Loss: 3.6630463075637816
Epoch: 8/40 Loss: 3.6748353147506716
Epoch: 8/40 Loss: 3.649612395763397
Epoch: 8/40 Loss: 3.677780649662018
Epoch: 9/40 Loss: 3.5939259241658745
Epoch: 9/40 Loss: 3.571683213710785
Epoch: 9/40 Loss: 3.572504801750183
Epoch: 9/40 Loss: 3.563976306915283
Epoch: 9/40 Loss: 3.585196068286896
Epoch: 9/40 Loss: 3.609772081375122
Epoch: 9/40 Loss: 3.591744203567505
Epoch: 9/40 Loss: 3.6093695425987242
Epoch: 9/40 Loss: 3.610701684951782
Epoch: 9/40 Loss: 3.5989301013946533
Epoch: 9/40 Loss: 3.579483392238617
Epoch: 9/40 Loss: 3.621468951702118
Epoch: 9/40 Loss: 3.596694543361664
Epoch: 9/40 Loss: 3.6036853432655334
Epoch: 9/40 Loss: 3.6263876819610594
Epoch: 9/40 Loss: 3.6100056648254393
Epoch: 9/40 Loss: 3.60446980714798
Epoch: 10/40 Loss: 3.546570563147254
Epoch: 10/40 Loss: 3.5193035292625425
Epoch: 10/40 Loss: 3.527236454486847
Epoch: 10/40 Loss: 3.5315622735023497
Epoch: 10/40 Loss: 3.5405736994743346
Epoch: 10/40 Loss: 3.5505992460250853
Epoch: 10/40 Loss: 3.540771176815033
Epoch: 10/40 Loss: 3.558017601966858
Epoch: 10/40 Loss: 3.5566860485076903
Epoch: 10/40 Loss: 3.5459209084510803
Epoch: 10/40 Loss: 3.5672666001319886
Epoch: 10/40 Loss: 3.5641508007049563
Epoch: 10/40 Loss: 3.5496798038482664
Epoch: 10/40 Loss: 3.574055378437042
Epoch: 10/40 Loss: 3.5617594385147093
Epoch: 10/40 Loss: 3.559128615856171
Epoch: 10/40 Loss: 3.5679479455947876
Epoch: 11/40 Loss: 3.506722019073811
Epoch: 11/40 Loss: 3.4964158725738526
Epoch: 11/40 Loss: 3.4934451007843017
Epoch: 11/40 Loss: 3.516865141391754
Epoch: 11/40 Loss: 3.504990770816803
Epoch: 11/40 Loss: 3.504933135509491
Epoch: 11/40 Loss: 3.5081810975074768
Epoch: 11/40 Loss: 3.5264525747299196
Epoch: 11/40 Loss: 3.5046090054512025
Epoch: 11/40 Loss: 3.5161074209213257
Epoch: 11/40 Loss: 3.527113344669342
Epoch: 11/40 Loss: 3.5243750500679014
Epoch: 11/40 Loss: 3.498313910961151
Epoch: 11/40 Loss: 3.525337574481964
Epoch: 11/40 Loss: 3.521401822566986
Epoch: 11/40 Loss: 3.5258352732658387
Epoch: 11/40 Loss: 3.527318332195282
Epoch: 12/40 Loss: 3.476656978011977
Epoch: 12/40 Loss: 3.450390686988831
Epoch: 12/40 Loss: 3.4417812037467956
Epoch: 12/40 Loss: 3.460411469936371
Epoch: 12/40 Loss: 3.4699028301239014
Epoch: 12/40 Loss: 3.4750412702560425
Epoch: 12/40 Loss: 3.472771079540253
Epoch: 12/40 Loss: 3.4660913157463074
Epoch: 12/40 Loss: 3.4755796074867247
Epoch: 12/40 Loss: 3.502383146286011
Epoch: 12/40 Loss: 3.486322202682495
Epoch: 12/40 Loss: 3.484312505722046
Epoch: 12/40 Loss: 3.497102077007294
Epoch: 12/40 Loss: 3.490264842510223
Epoch: 12/40 Loss: 3.4757595610618592
Epoch: 12/40 Loss: 3.502481999397278
Epoch: 12/40 Loss: 3.4794949007034304
Epoch: 13/40 Loss: 3.4322432051313685
Epoch: 13/40 Loss: 3.4397751021385194
Epoch: 13/40 Loss: 3.4296371936798096
Epoch: 13/40 Loss: 3.4359394812583925
Epoch: 13/40 Loss: 3.434664399623871
Epoch: 13/40 Loss: 3.422861933708191
Epoch: 13/40 Loss: 3.4323301005363462
Epoch: 13/40 Loss: 3.4305405473709105
Epoch: 13/40 Loss: 3.4646142148971557
Epoch: 13/40 Loss: 3.4647024202346803
Epoch: 13/40 Loss: 3.445142068862915
Epoch: 13/40 Loss: 3.4573641347885133
Epoch: 13/40 Loss: 3.465134286880493
Epoch: 13/40 Loss: 3.4545325469970702
Epoch: 13/40 Loss: 3.4514136958122252
Epoch: 13/40 Loss: 3.4467888617515565
Epoch: 13/40 Loss: 3.4437260174751283
Epoch: 14/40 Loss: 3.418311859699006
Epoch: 14/40 Loss: 3.368879249095917
Epoch: 14/40 Loss: 3.3830223059654236
Epoch: 14/40 Loss: 3.401392252445221
Epoch: 14/40 Loss: 3.3881922268867495
Epoch: 14/40 Loss: 3.399292998313904
Epoch: 14/40 Loss: 3.4179391717910765
Epoch: 14/40 Loss: 3.409709415435791
Epoch: 14/40 Loss: 3.418008463382721
Epoch: 14/40 Loss: 3.4281705379486085
Epoch: 14/40 Loss: 3.4242951536178587
Epoch: 14/40 Loss: 3.4236558198928835
Epoch: 14/40 Loss: 3.413107385635376
Epoch: 14/40 Loss: 3.4205154633522032
Epoch: 14/40 Loss: 3.431173324584961
Epoch: 14/40 Loss: 3.440660014152527
Epoch: 14/40 Loss: 3.4360440230369567
Epoch: 15/40 Loss: 3.3668645679527986
Epoch: 15/40 Loss: 3.353421995639801
Epoch: 15/40 Loss: 3.3431429624557496
Epoch: 15/40 Loss: 3.35803386926651
Epoch: 15/40 Loss: 3.3600885033607484
Epoch: 15/40 Loss: 3.3828885626792906
Epoch: 15/40 Loss: 3.3685464119911193
Epoch: 15/40 Loss: 3.3805907797813415
Epoch: 15/40 Loss: 3.386972198486328
Epoch: 15/40 Loss: 3.398211431503296
Epoch: 15/40 Loss: 3.407992901802063
Epoch: 15/40 Loss: 3.3943982195854185
Epoch: 15/40 Loss: 3.400393536090851
Epoch: 15/40 Loss: 3.4144240856170653
Epoch: 15/40 Loss: 3.4137556076049806
Epoch: 15/40 Loss: 3.4087609386444093
Epoch: 15/40 Loss: 3.425365447998047
Epoch: 16/40 Loss: 3.3526262442270913
Epoch: 16/40 Loss: 3.341873707771301
Epoch: 16/40 Loss: 3.335239999294281
Epoch: 16/40 Loss: 3.3401453590393064
Epoch: 16/40 Loss: 3.325375301837921
Epoch: 16/40 Loss: 3.353274827003479
Epoch: 16/40 Loss: 3.366937131881714
Epoch: 16/40 Loss: 3.3424195432662964
Epoch: 16/40 Loss: 3.3706952142715454
Epoch: 16/40 Loss: 3.3572868704795837
Epoch: 16/40 Loss: 3.369709508419037
Epoch: 16/40 Loss: 3.3803389120101928
Epoch: 16/40 Loss: 3.3692610931396483
Epoch: 16/40 Loss: 3.366413378715515
Epoch: 16/40 Loss: 3.3921719861030577
Epoch: 16/40 Loss: 3.386460587978363
Epoch: 16/40 Loss: 3.3752875924110413
Epoch: 17/40 Loss: 3.3144431114196777
Epoch: 17/40 Loss: 3.316742401123047
Epoch: 17/40 Loss: 3.3199937248229983
Epoch: 17/40 Loss: 3.3262393426895143
Epoch: 17/40 Loss: 3.3088027143478396
Epoch: 17/40 Loss: 3.316550068855286
Epoch: 17/40 Loss: 3.318686828613281
Epoch: 17/40 Loss: 3.3388957238197325
Epoch: 17/40 Loss: 3.32929988861084
Epoch: 17/40 Loss: 3.3600588726997374
Epoch: 17/40 Loss: 3.3361262679100037
Epoch: 17/40 Loss: 3.3680515551567076
Epoch: 17/40 Loss: 3.3566253852844237
Epoch: 17/40 Loss: 3.33636949300766
Epoch: 17/40 Loss: 3.3561578035354613
Epoch: 17/40 Loss: 3.371836874485016
Epoch: 17/40 Loss: 3.3510951161384583
Epoch: 18/40 Loss: 3.3190926947492234
Epoch: 18/40 Loss: 3.2852808141708376
Epoch: 18/40 Loss: 3.2837573146820067
Epoch: 18/40 Loss: 3.2768738412857057
Epoch: 18/40 Loss: 3.300302140712738
Epoch: 18/40 Loss: 3.2887814974784852
Epoch: 18/40 Loss: 3.329373047351837
Epoch: 18/40 Loss: 3.291519181728363
Epoch: 18/40 Loss: 3.329669234752655
Epoch: 18/40 Loss: 3.330455493927002
Epoch: 18/40 Loss: 3.3390631699562072
Epoch: 18/40 Loss: 3.332003128528595
Epoch: 18/40 Loss: 3.334246401786804
Epoch: 18/40 Loss: 3.3429106092453003
Epoch: 18/40 Loss: 3.33590163230896
Epoch: 18/40 Loss: 3.32009378194809
Epoch: 18/40 Loss: 3.332184455394745
Epoch: 19/40 Loss: 3.30308549793054
Epoch: 19/40 Loss: 3.2467750215530398
Epoch: 19/40 Loss: 3.2841764950752257
Epoch: 19/40 Loss: 3.26810542345047
Epoch: 19/40 Loss: 3.300795569419861
Epoch: 19/40 Loss: 3.2869703912734987
Epoch: 19/40 Loss: 3.287803626060486
Epoch: 19/40 Loss: 3.2797577238082884
Epoch: 19/40 Loss: 3.2960555076599123
Epoch: 19/40 Loss: 3.3062274289131164
Epoch: 19/40 Loss: 3.2897856307029723
Epoch: 19/40 Loss: 3.3016039061546327
Epoch: 19/40 Loss: 3.3236393976211547
Epoch: 19/40 Loss: 3.296950433254242
Epoch: 19/40 Loss: 3.315734574794769
Epoch: 19/40 Loss: 3.3063604927062986
Epoch: 19/40 Loss: 3.3109201407432556
Epoch: 20/40 Loss: 3.250730284562348
Epoch: 20/40 Loss: 3.234354331493378
Epoch: 20/40 Loss: 3.2678548550605773
Epoch: 20/40 Loss: 3.254397292137146
Epoch: 20/40 Loss: 3.265232512950897
Epoch: 20/40 Loss: 3.243854110240936
Epoch: 20/40 Loss: 3.264221966266632
Epoch: 20/40 Loss: 3.2638847398757935
Epoch: 20/40 Loss: 3.27991676568985
Epoch: 20/40 Loss: 3.283143947124481
Epoch: 20/40 Loss: 3.285353283882141
Epoch: 20/40 Loss: 3.278928484916687
Epoch: 20/40 Loss: 3.295120358467102
Epoch: 20/40 Loss: 3.2980913496017457
Epoch: 20/40 Loss: 3.3046781611442566
Epoch: 20/40 Loss: 3.3096171188354493
Epoch: 20/40 Loss: 3.3069554924964906
Epoch: 21/40 Loss: 3.244679498334303
Epoch: 21/40 Loss: 3.22762056350708
Epoch: 21/40 Loss: 3.2130341482162477
Epoch: 21/40 Loss: 3.257251238822937
Epoch: 21/40 Loss: 3.245500326156616
Epoch: 21/40 Loss: 3.248044800758362
Epoch: 21/40 Loss: 3.23320969581604
Epoch: 21/40 Loss: 3.2498206901550293
Epoch: 21/40 Loss: 3.25490740776062
Epoch: 21/40 Loss: 3.27536936044693
Epoch: 21/40 Loss: 3.273723158836365
Epoch: 21/40 Loss: 3.2595191645622252
Epoch: 21/40 Loss: 3.26969612121582
Epoch: 21/40 Loss: 3.2700145125389097
Epoch: 21/40 Loss: 3.2814616417884825
Epoch: 21/40 Loss: 3.284317648410797
Epoch: 21/40 Loss: 3.2850093412399293
Epoch: 22/40 Loss: 3.2248353687583977
Epoch: 22/40 Loss: 3.210673067569733
Epoch: 22/40 Loss: 3.219316337108612
Epoch: 22/40 Loss: 3.224736533164978
Epoch: 22/40 Loss: 3.20661607503891
Epoch: 22/40 Loss: 3.220269646644592
Epoch: 22/40 Loss: 3.239449326992035
Epoch: 22/40 Loss: 3.2214272451400756
Epoch: 22/40 Loss: 3.2332403659820557
Epoch: 22/40 Loss: 3.244079988002777
Epoch: 22/40 Loss: 3.26028422832489
Epoch: 22/40 Loss: 3.2579054737091067
Epoch: 22/40 Loss: 3.264564619064331
Epoch: 22/40 Loss: 3.2539597725868226
Epoch: 22/40 Loss: 3.250880994796753
Epoch: 22/40 Loss: 3.2485162568092347
Epoch: 22/40 Loss: 3.2736807656288147
Epoch: 23/40 Loss: 3.213079464351032
Epoch: 23/40 Loss: 3.1777949571609496
Epoch: 23/40 Loss: 3.1924854588508604
Epoch: 23/40 Loss: 3.2012481927871703
Epoch: 23/40 Loss: 3.210305972099304
Epoch: 23/40 Loss: 3.206139461994171
Epoch: 23/40 Loss: 3.2201529502868653
Epoch: 23/40 Loss: 3.2169682145118714
Epoch: 23/40 Loss: 3.23088502407074
Epoch: 23/40 Loss: 3.2479660868644715
Epoch: 23/40 Loss: 3.2469769072532655
Epoch: 23/40 Loss: 3.2408736181259155
Epoch: 23/40 Loss: 3.2281478953361513
Epoch: 23/40 Loss: 3.2334540367126463
Epoch: 23/40 Loss: 3.249402256011963
Epoch: 23/40 Loss: 3.247013552188873
Epoch: 23/40 Loss: 3.2362549924850463
Epoch: 24/40 Loss: 3.194632408466745
Epoch: 24/40 Loss: 3.1880836606025698
Epoch: 24/40 Loss: 3.173503193855286
Epoch: 24/40 Loss: 3.183918526172638
Epoch: 24/40 Loss: 3.20584801197052
Epoch: 24/40 Loss: 3.186205050945282
Epoch: 24/40 Loss: 3.1768306851387025
Epoch: 24/40 Loss: 3.216845338344574
Epoch: 24/40 Loss: 3.20932669878006
Epoch: 24/40 Loss: 3.2206866693496705
Epoch: 24/40 Loss: 3.210733377933502
Epoch: 24/40 Loss: 3.2292572045326233
Epoch: 24/40 Loss: 3.2176726198196413
Epoch: 24/40 Loss: 3.2254205560684204
Epoch: 24/40 Loss: 3.2309138321876527
Epoch: 24/40 Loss: 3.2119682121276854
Epoch: 24/40 Loss: 3.2441273593902586
Epoch: 25/40 Loss: 3.1714860801155687
Epoch: 25/40 Loss: 3.1597762441635133
Epoch: 25/40 Loss: 3.1504669404029846
Epoch: 25/40 Loss: 3.176701135635376
Epoch: 25/40 Loss: 3.1802252388000487
Epoch: 25/40 Loss: 3.189257822036743
Epoch: 25/40 Loss: 3.1836634182929995
Epoch: 25/40 Loss: 3.19689603805542
Epoch: 25/40 Loss: 3.2035111737251283
Epoch: 25/40 Loss: 3.1948340821266172
Epoch: 25/40 Loss: 3.1971490836143492
Epoch: 25/40 Loss: 3.197871162891388
Epoch: 25/40 Loss: 3.212331852912903
Epoch: 25/40 Loss: 3.216475923061371
Epoch: 25/40 Loss: 3.2174252033233643
Epoch: 25/40 Loss: 3.2194750118255615
Epoch: 25/40 Loss: 3.2181735324859617
Epoch: 26/40 Loss: 3.1584466653512724
Epoch: 26/40 Loss: 3.1307985043525695
Epoch: 26/40 Loss: 3.1511295938491823
Epoch: 26/40 Loss: 3.1561445498466494
Epoch: 26/40 Loss: 3.16102933883667
Epoch: 26/40 Loss: 3.193759162425995
Epoch: 26/40 Loss: 3.167753059864044
Epoch: 26/40 Loss: 3.166281361579895
Epoch: 26/40 Loss: 3.171961977481842
Epoch: 26/40 Loss: 3.204296953678131
Epoch: 26/40 Loss: 3.1796568131446836
Epoch: 26/40 Loss: 3.1957959032058714
Epoch: 26/40 Loss: 3.1802221965789794
Epoch: 26/40 Loss: 3.211048431396484
Epoch: 26/40 Loss: 3.204625527858734
Epoch: 26/40 Loss: 3.200158267021179
Epoch: 26/40 Loss: 3.218621687889099
Epoch: 27/40 Loss: 3.1347438054727323
Epoch: 27/40 Loss: 3.1277751326560974
Epoch: 27/40 Loss: 3.1390751576423646
Epoch: 27/40 Loss: 3.1386599230766294
Epoch: 27/40 Loss: 3.1473884105682375
Epoch: 27/40 Loss: 3.1754218530654907
Epoch: 27/40 Loss: 3.1531568813323974
Epoch: 27/40 Loss: 3.158843629360199
Epoch: 27/40 Loss: 3.161986927986145
Epoch: 27/40 Loss: 3.1675448203086853
Epoch: 27/40 Loss: 3.181237106323242
Epoch: 27/40 Loss: 3.174220552444458
Epoch: 27/40 Loss: 3.193573455810547
Epoch: 27/40 Loss: 3.199631321430206
Epoch: 27/40 Loss: 3.1966242575645447
Epoch: 27/40 Loss: 3.187302920818329
Epoch: 27/40 Loss: 3.179771430492401
Epoch: 28/40 Loss: 3.1253121491019606
Epoch: 28/40 Loss: 3.1078640604019165
Epoch: 28/40 Loss: 3.1200510835647584
Epoch: 28/40 Loss: 3.1352910447120665
Epoch: 28/40 Loss: 3.144666702747345
Epoch: 28/40 Loss: 3.148395698070526
Epoch: 28/40 Loss: 3.132585105895996
Epoch: 28/40 Loss: 3.1533688592910765
Epoch: 28/40 Loss: 3.176629545688629
Epoch: 28/40 Loss: 3.157125232219696
Epoch: 28/40 Loss: 3.136765356063843
Epoch: 28/40 Loss: 3.1768166399002076
Epoch: 28/40 Loss: 3.1646726822853086
Epoch: 28/40 Loss: 3.1582090616226197
Epoch: 28/40 Loss: 3.187243251800537
Epoch: 28/40 Loss: 3.198671405315399
Epoch: 28/40 Loss: 3.1810974311828613
Epoch: 29/40 Loss: 3.1224944304067193
Epoch: 29/40 Loss: 3.0980203342437744
Epoch: 29/40 Loss: 3.10988343000412
Epoch: 29/40 Loss: 3.1138494420051575
Epoch: 29/40 Loss: 3.1158336281776426
Epoch: 29/40 Loss: 3.1383388233184815
Epoch: 29/40 Loss: 3.1335808777809144
Epoch: 29/40 Loss: 3.1291400599479675
Epoch: 29/40 Loss: 3.1391796016693116
Epoch: 29/40 Loss: 3.1284532237052916
Epoch: 29/40 Loss: 3.1609995126724244
Epoch: 29/40 Loss: 3.173295159339905
Epoch: 29/40 Loss: 3.160419499874115
Epoch: 29/40 Loss: 3.1493277525901795
Epoch: 29/40 Loss: 3.1695111894607546
Epoch: 29/40 Loss: 3.178649597167969
Epoch: 29/40 Loss: 3.166031205654144
Epoch: 30/40 Loss: 3.1058018224459167
Epoch: 30/40 Loss: 3.1058628368377685
Epoch: 30/40 Loss: 3.103884320259094
Epoch: 30/40 Loss: 3.11926878452301
Epoch: 30/40 Loss: 3.1035250329971316
Epoch: 30/40 Loss: 3.1187919187545776
Epoch: 30/40 Loss: 3.127573127746582
Epoch: 30/40 Loss: 3.1092315220832827
Epoch: 30/40 Loss: 3.1376067280769346
Epoch: 30/40 Loss: 3.119958481788635
Epoch: 30/40 Loss: 3.1435403394699097
Epoch: 30/40 Loss: 3.1425402975082397
Epoch: 30/40 Loss: 3.1489040541648863
Epoch: 30/40 Loss: 3.1570923924446106
Epoch: 30/40 Loss: 3.164622006416321
Epoch: 30/40 Loss: 3.17054402589798
Epoch: 30/40 Loss: 3.1326127076148986
Epoch: 31/40 Loss: 3.0954199750372706
Epoch: 31/40 Loss: 3.0836986660957337
Epoch: 31/40 Loss: 3.1009942626953126
Epoch: 31/40 Loss: 3.0943004179000853
Epoch: 31/40 Loss: 3.105739517211914
Epoch: 31/40 Loss: 3.0955708718299864
Epoch: 31/40 Loss: 3.1216368436813355
Epoch: 31/40 Loss: 3.114381875991821
Epoch: 31/40 Loss: 3.103736016750336
Epoch: 31/40 Loss: 3.1147751355171205
Epoch: 31/40 Loss: 3.1347358417510987
Epoch: 31/40 Loss: 3.1213193845748903
Epoch: 31/40 Loss: 3.1320442962646484
Epoch: 31/40 Loss: 3.123191270828247
Epoch: 31/40 Loss: 3.1390389704704287
Epoch: 31/40 Loss: 3.161129581928253
Epoch: 31/40 Loss: 3.1685166549682617
Epoch: 32/40 Loss: 3.089466184589034
Epoch: 32/40 Loss: 3.0725114440917967
Epoch: 32/40 Loss: 3.083483488559723
Epoch: 32/40 Loss: 3.083820221424103
Epoch: 32/40 Loss: 3.0990681624412537
Epoch: 32/40 Loss: 3.0928625011444093
Epoch: 32/40 Loss: 3.095714304447174
Epoch: 32/40 Loss: 3.0866333127021788
Epoch: 32/40 Loss: 3.111944947242737
Epoch: 32/40 Loss: 3.090879271030426
Epoch: 32/40 Loss: 3.1115441870689393
Epoch: 32/40 Loss: 3.1197412753105165
Epoch: 32/40 Loss: 3.128634970188141
Epoch: 32/40 Loss: 3.1350068163871767
Epoch: 32/40 Loss: 3.1211376357078553
Epoch: 32/40 Loss: 3.1395605635643005
Epoch: 32/40 Loss: 3.1361198830604553
Epoch: 33/40 Loss: 3.086585545370765
Epoch: 33/40 Loss: 3.072260582447052
Epoch: 33/40 Loss: 3.0543723726272582
Epoch: 33/40 Loss: 3.0772309589385984
Epoch: 33/40 Loss: 3.081868813037872
Epoch: 33/40 Loss: 3.069645149707794
Epoch: 33/40 Loss: 3.0883045196533203
Epoch: 33/40 Loss: 3.0884637570381166
Epoch: 33/40 Loss: 3.066096429824829
Epoch: 33/40 Loss: 3.1002284812927248
Epoch: 33/40 Loss: 3.1117766952514647
Epoch: 33/40 Loss: 3.0956111335754395
Epoch: 33/40 Loss: 3.1122912979125976
Epoch: 33/40 Loss: 3.124182946681976
Epoch: 33/40 Loss: 3.119167468547821
Epoch: 33/40 Loss: 3.145171067714691
Epoch: 33/40 Loss: 3.132662398815155
Epoch: 34/40 Loss: 3.0590222676595054
Epoch: 34/40 Loss: 3.0496167302131654
Epoch: 34/40 Loss: 3.060490138530731
Epoch: 34/40 Loss: 3.066323843002319
Epoch: 34/40 Loss: 3.080504348278046
Epoch: 34/40 Loss: 3.077186484336853
Epoch: 34/40 Loss: 3.0733182311058043
Epoch: 34/40 Loss: 3.0917574262619016
Epoch: 34/40 Loss: 3.086994228363037
Epoch: 34/40 Loss: 3.1016873049736025
Epoch: 34/40 Loss: 3.0834479784965514
Epoch: 34/40 Loss: 3.1063265109062197
Epoch: 34/40 Loss: 3.085716173648834
Epoch: 34/40 Loss: 3.10706595659256
Epoch: 34/40 Loss: 3.0997568130493165
Epoch: 34/40 Loss: 3.101032466888428
Epoch: 34/40 Loss: 3.1156568002700804
Epoch: 35/40 Loss: 3.047261726771686
Epoch: 35/40 Loss: 3.0457681012153626
Epoch: 35/40 Loss: 3.022852544784546
Epoch: 35/40 Loss: 3.0514315795898437
Epoch: 35/40 Loss: 3.053847212791443
Epoch: 35/40 Loss: 3.06408616065979
Epoch: 35/40 Loss: 3.075976297855377
Epoch: 35/40 Loss: 3.0845091462135317
Epoch: 35/40 Loss: 3.073545527458191
Epoch: 35/40 Loss: 3.085630292892456
Epoch: 35/40 Loss: 3.06376736164093
Epoch: 35/40 Loss: 3.085243489742279
Epoch: 35/40 Loss: 3.0964259552955626
Epoch: 35/40 Loss: 3.094741454124451
Epoch: 35/40 Loss: 3.093990330696106
Epoch: 35/40 Loss: 3.1147731733322144
Epoch: 35/40 Loss: 3.1252348017692566
Epoch: 36/40 Loss: 3.0619314613071738
Epoch: 36/40 Loss: 3.043056778907776
Epoch: 36/40 Loss: 3.0420741748809816
Epoch: 36/40 Loss: 3.045200390815735
Epoch: 36/40 Loss: 3.016583435535431
Epoch: 36/40 Loss: 3.051652591228485
Epoch: 36/40 Loss: 3.048303759098053
Epoch: 36/40 Loss: 3.053185682296753
Epoch: 36/40 Loss: 3.044936332702637
Epoch: 36/40 Loss: 3.0767464208602906
Epoch: 36/40 Loss: 3.060809335708618
Epoch: 36/40 Loss: 3.0594623255729676
Epoch: 36/40 Loss: 3.075792133808136
Epoch: 36/40 Loss: 3.1087145352363588
Epoch: 36/40 Loss: 3.1005871272087098
Epoch: 36/40 Loss: 3.0985249614715578
Epoch: 36/40 Loss: 3.109562575817108
Epoch: 37/40 Loss: 3.039649987051673
Epoch: 37/40 Loss: 3.0196944093704223
Epoch: 37/40 Loss: 3.0304143905639647
Epoch: 37/40 Loss: 3.0496555590629577
Epoch: 37/40 Loss: 3.0501211714744567
Epoch: 37/40 Loss: 3.046262447834015
Epoch: 37/40 Loss: 3.039147734642029
Epoch: 37/40 Loss: 3.0401566457748412
Epoch: 37/40 Loss: 3.058485963344574
Epoch: 37/40 Loss: 3.0435601663589478
Epoch: 37/40 Loss: 3.0519022655487063
Epoch: 37/40 Loss: 3.076683769226074
Epoch: 37/40 Loss: 3.0770651078224183
Epoch: 37/40 Loss: 3.0736726331710815
Epoch: 37/40 Loss: 3.087305688858032
Epoch: 37/40 Loss: 3.0911040019989016
Epoch: 37/40 Loss: 3.09265513420105
Epoch: 38/40 Loss: 3.0268066453595535
Epoch: 38/40 Loss: 3.0059891390800475
Epoch: 38/40 Loss: 3.022848274707794
Epoch: 38/40 Loss: 3.047044599056244
Epoch: 38/40 Loss: 3.0246161103248594
Epoch: 38/40 Loss: 3.0512567067146303
Epoch: 38/40 Loss: 3.0230589652061464
Epoch: 38/40 Loss: 3.0349004793167116
Epoch: 38/40 Loss: 3.055660412311554
Epoch: 38/40 Loss: 3.0587748074531556
Epoch: 38/40 Loss: 3.053401825428009
Epoch: 38/40 Loss: 3.050684370994568
Epoch: 38/40 Loss: 3.065198554992676
Epoch: 38/40 Loss: 3.070618669986725
Epoch: 38/40 Loss: 3.0634470272064207
Epoch: 38/40 Loss: 3.0640885925292967
Epoch: 38/40 Loss: 3.0807826352119445
Epoch: 39/40 Loss: 3.021529882512194
Epoch: 39/40 Loss: 2.992088301181793
Epoch: 39/40 Loss: 3.011819338798523
Epoch: 39/40 Loss: 3.009713976383209
Epoch: 39/40 Loss: 3.0200268173217775
Epoch: 39/40 Loss: 3.0201913595199583
Epoch: 39/40 Loss: 3.0382388114929197
Epoch: 39/40 Loss: 3.024594168663025
Epoch: 39/40 Loss: 3.0410870265960694
Epoch: 39/40 Loss: 3.043372585773468
Epoch: 39/40 Loss: 3.065853326320648
Epoch: 39/40 Loss: 3.0604931783676146
Epoch: 39/40 Loss: 3.0662604355812073
Epoch: 39/40 Loss: 3.0718659567832947
Epoch: 39/40 Loss: 3.0753650856018067
Epoch: 39/40 Loss: 3.0663579082489014
Epoch: 39/40 Loss: 3.06232497215271
Epoch: 40/40 Loss: 3.004919931398216
Epoch: 40/40 Loss: 2.978828547000885
Epoch: 40/40 Loss: 3.0072986698150634
Epoch: 40/40 Loss: 3.0013407492637634
Epoch: 40/40 Loss: 3.0154279041290284
Epoch: 40/40 Loss: 3.0168230056762697
Epoch: 40/40 Loss: 3.0233750534057617
Epoch: 40/40 Loss: 3.02440927028656
Epoch: 40/40 Loss: 3.019312882423401
Epoch: 40/40 Loss: 3.0260395884513853
Epoch: 40/40 Loss: 3.0328040409088133
Epoch: 40/40 Loss: 3.0498186683654787
Epoch: 40/40 Loss: 3.057143015861511
Epoch: 40/40 Loss: 3.056639790534973
Epoch: 40/40 Loss: 3.070470190048218
Epoch: 40/40 Loss: 3.062474200725555
Epoch: 40/40 Loss: 3.0681377267837524
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I looked at the examples in the RNN lesson for most of my parameters. For the sequence length, I originally used 100 but noticed that it was taking far too long to train, so I scaled it down to 10. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
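Before reading the full `generate` function below, it may help to see the top-k sampling step in isolation. The following is a minimal sketch with a made-up score tensor and toy vocabulary size; it only illustrates the mechanic and is not part of the graded code:
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up raw word scores for a single input sequence over a toy 8-word vocabulary
output = torch.tensor([[0.1, 2.3, 0.7, 3.1, 0.2, 1.9, 0.05, 0.4]])

p = F.softmax(output, dim=1).data        # convert scores to probabilities
p, top_i = p.topk(5)                     # keep the 5 most likely word ids
p = p.numpy().squeeze()
top_i = top_i.numpy().squeeze()

# sample one of the top candidates, weighted by its re-normalized probability
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```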
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# move the sequence back to the cpu so numpy can roll it for the next step
if train_on_gpu:
current_seq = current_seq.cpu()
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
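For reference, one plausible way to fill in the cell above is sketched here. The sequence length of 10 and the 40 epochs are taken from the training run and answer recorded earlier in this document; every other value is an assumption picked for illustration, not a prescribed answer:
```
# Data params
sequence_length = 10          # from the earlier answer: shortened from 100 to 10
batch_size = 128              # assumed

# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)

# Training parameters
num_epochs = 40               # the earlier training log runs for 40 epochs
learning_rate = 0.001         # assumed; a common starting point for Adam

# Model parameters
vocab_size = len(vocab_to_int)    # one entry per unique token
output_size = vocab_size          # one score per vocabulary word
embedding_dim = 200               # assumed; smaller than vocab_size
hidden_dim = 256                  # assumed
n_layers = 2                      # assumed

# Show stats for every n number of batches
show_every_n_batches = 500
```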
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# move the sequence back to the cpu so numpy can roll it for the next step
if train_on_gpu:
current_seq = current_seq.cpu()
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# counter = Counter(text)
vocab_to_int = {w: idx for idx, w in enumerate(set(text))}
int_to_vocab = {idx: w for w, idx in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
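A quick round-trip check makes the relationship between the two dictionaries concrete; the toy word list below is made up for illustration:
```
# build the lookup tables from a tiny word list and invert a single word
toy_words = ['jerry', 'george', 'elaine', 'kramer', 'jerry']
v2i, i2v = create_lookup_tables(toy_words)

idx = v2i['kramer']
assert i2v[idx] == 'kramer'   # int_to_vocab undoes vocab_to_int
print(len(v2i), len(i2v))     # 4 unique words in each direction
```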
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
lookup = {
".": "period",
",": "comma",
'"': "quotation",
";": "semicolon",
"!": "exclame",
"?": "question",
"(": "l_paren",
")": "r_paren",
"-": "dash",
"\n": "return"
}
return {k: f"||{v}||" for k, v in lookup.items()}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
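To see what the tokenization does to a line of dialogue, here is a small usage sketch; the sample sentence is invented and the replacement loop mirrors what the preprocessing step does:
```
# replace each punctuation symbol with its padded token so it splits as its own "word"
sample = 'you wanna go out? ok!'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['you', 'wanna', 'go', 'out', '||question||', 'ok', '||exclame||']
```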
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = int(len(words) / batch_size)
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
features = []
targets = []
for idx in range(0, y_len):
idx_end = sequence_length + idx
features.append(words[idx:idx_end])
targets.append(words[idx_end])
# create Tensor datasets
features = torch.from_numpy(np.asarray(features)).to(torch.int64)
targets = torch.from_numpy(np.asarray(targets)).to(torch.int64)
data = TensorDataset(features, targets)
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
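The batches above come back in order because this `batch_data` sketch passes `shuffle=False` to the `DataLoader`; the shuffled ordering shown in the expected-output example would come from flipping that flag. As a quick illustration, reusing the sample batch fetched above purely for demonstration:
```
# wrap the already-fetched sample batch and let the DataLoader shuffle its rows
shuffled_loader = DataLoader(TensorDataset(sample_x, sample_y), shuffle=True, batch_size=10)
sx, sy = next(iter(shuffled_loader))
print(sx)   # same rows as sample_x, but in a random order
print(sy)
```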
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
# self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
# out = self.sig(out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([h.data for h in hidden])
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# commenting out: this test library doesn't function properly on Windows
# tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 16
# Learning Rate
learning_rate = .001
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
print(f"Vocab Size: {vocab_size}")
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
print(f"Hidden Size {hidden_dim}")
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
Vocab Size: 21388
Hidden Size 512
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 16 epoch(s)...
Epoch: 1/16 Loss: 5.455310563564301
Epoch: 1/16 Loss: 4.772419750213623
Epoch: 1/16 Loss: 4.716299479007721
Epoch: 1/16 Loss: 4.552140981197357
Epoch: 1/16 Loss: 4.434564594268799
Epoch: 1/16 Loss: 4.512522238254547
Epoch: 2/16 Loss: 4.353542388273765
Epoch: 2/16 Loss: 4.112563259601593
Epoch: 2/16 Loss: 4.209814462661743
Epoch: 2/16 Loss: 4.132946473121643
Epoch: 2/16 Loss: 4.082923572540283
Epoch: 2/16 Loss: 4.201064155101776
Epoch: 3/16 Loss: 4.1108441537176
Epoch: 3/16 Loss: 3.927784694671631
Epoch: 3/16 Loss: 4.032104474544525
Epoch: 3/16 Loss: 3.9623236584663393
Epoch: 3/16 Loss: 3.9091508259773255
Epoch: 3/16 Loss: 4.037691452503204
Epoch: 4/16 Loss: 3.9651220259564575
Epoch: 4/16 Loss: 3.8103473734855653
Epoch: 4/16 Loss: 3.9093316793441772
Epoch: 4/16 Loss: 3.853048634529114
Epoch: 4/16 Loss: 3.798801118373871
Epoch: 4/16 Loss: 3.926910074710846
Epoch: 5/16 Loss: 3.862949095810418
Epoch: 5/16 Loss: 3.713429450035095
Epoch: 5/16 Loss: 3.817374423980713
Epoch: 5/16 Loss: 3.7699388189315797
Epoch: 5/16 Loss: 3.7059623708724976
Epoch: 5/16 Loss: 3.840308710575104
Epoch: 6/16 Loss: 3.7823584436278863
Epoch: 6/16 Loss: 3.642212465763092
Epoch: 6/16 Loss: 3.7474600033760073
Epoch: 6/16 Loss: 3.6987796382904055
Epoch: 6/16 Loss: 3.6360070123672483
Epoch: 6/16 Loss: 3.7746032824516296
Epoch: 7/16 Loss: 3.7149494176963582
Epoch: 7/16 Loss: 3.5838490104675294
Epoch: 7/16 Loss: 3.6858344054222107
Epoch: 7/16 Loss: 3.6417576298713685
Epoch: 7/16 Loss: 3.5735020847320556
Epoch: 7/16 Loss: 3.7180045647621154
Epoch: 8/16 Loss: 3.6615983006427877
Epoch: 8/16 Loss: 3.534547165393829
Epoch: 8/16 Loss: 3.6369629769325256
Epoch: 8/16 Loss: 3.588692858219147
Epoch: 8/16 Loss: 3.5269871506690977
Epoch: 8/16 Loss: 3.6620579299926757
Epoch: 9/16 Loss: 3.614403916278424
Epoch: 9/16 Loss: 3.492366225242615
Epoch: 9/16 Loss: 3.5886473517417907
Epoch: 9/16 Loss: 3.5463910126686096
Epoch: 9/16 Loss: 3.481283280849457
Epoch: 9/16 Loss: 3.6196661958694456
Epoch: 10/16 Loss: 3.5720256781650828
Epoch: 10/16 Loss: 3.46034694480896
Epoch: 10/16 Loss: 3.5504080929756165
Epoch: 10/16 Loss: 3.5089728603363035
Epoch: 10/16 Loss: 3.444026230335236
Epoch: 10/16 Loss: 3.5788377642631533
Epoch: 11/16 Loss: 3.534366174971463
Epoch: 11/16 Loss: 3.4309493017196657
Epoch: 11/16 Loss: 3.5150436358451844
Epoch: 11/16 Loss: 3.4690163083076477
Epoch: 11/16 Loss: 3.409424701690674
Epoch: 11/16 Loss: 3.547413890361786
Epoch: 12/16 Loss: 3.500821440532758
Epoch: 12/16 Loss: 3.3976372637748717
Epoch: 12/16 Loss: 3.4833766541481017
Epoch: 12/16 Loss: 3.4366101541519165
Epoch: 12/16 Loss: 3.3768665647506713
Epoch: 12/16 Loss: 3.511057852268219
Epoch: 13/16 Loss: 3.4743638674094743
Epoch: 13/16 Loss: 3.369061732292175
Epoch: 13/16 Loss: 3.4509952960014343
Epoch: 13/16 Loss: 3.4072359261512757
Epoch: 13/16 Loss: 3.358080631732941
Epoch: 13/16 Loss: 3.4801366052627563
Epoch: 14/16 Loss: 3.4446755556706132
Epoch: 14/16 Loss: 3.3490402779579163
Epoch: 14/16 Loss: 3.4264907069206236
Epoch: 14/16 Loss: 3.38200022649765
Epoch: 14/16 Loss: 3.3263862652778626
Epoch: 14/16 Loss: 3.451665801525116
Epoch: 15/16 Loss: 3.4183051986345077
Epoch: 15/16 Loss: 3.323824038028717
Epoch: 15/16 Loss: 3.405555274963379
Epoch: 15/16 Loss: 3.3591946597099303
Epoch: 15/16 Loss: 3.3014762415885923
Epoch: 15/16 Loss: 3.4289697074890135
Epoch: 16/16 Loss: 3.3959099176820153
Epoch: 16/16 Loss: 3.3018528842926025
Epoch: 16/16 Loss: 3.383438117027283
Epoch: 16/16 Loss: 3.3385923748016357
Epoch: 16/16 Loss: 3.279401508808136
Epoch: 16/16 Loss: 3.4071180233955385
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I shifted the sequence length until the loss started below 6 and did not bottom out before reaching 3. I started with the previously used n_layers and dimension sizes, then adjusted them in multiples of 8 and tried different embedding_dim to hidden_dim combinations until a good configuration was reached. Interesting note: building the word encoding without a Counter worked better than the Counter-based version. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
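As a concrete illustration of the top-k sampling step (a standalone sketch with made-up word scores; the variable names here are only for illustration), the next-word choice boils down to:
```
import torch
import torch.nn.functional as F
import numpy as np

# made-up word scores for a tiny vocabulary of 8 words (batch of one sequence)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.0, 0.7, 1.9, 0.2]])

p = F.softmax(scores, dim=1).data   # convert scores to probabilities
top_k = 5
p, top_i = p.topk(top_k)            # keep the k most likely word ids
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

# sample one of the top-k word ids, weighted by probability
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```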
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: and the stare.
jerry: oh, yeah, i guess.
jerry: so, you want to talk.
george: well, it's a lot of pressure.
kruger:(to jerry) oh!
jerry: oh, you know, i have no idea who the hell is that we have.
kramer: oh, no. it's not the one--
kramer:(leaving) hey, hey, i have a great idea.
george: you know, it's just a little bit of a woman.
jerry: i think we could go down there and talk to him about this.
elaine: oh.....
george:(to george) what?
jerry: what?
elaine:(confused) i don't know how to do this.
george: what?
jerry: i don't care for the rest of the life.
kramer: well, i can't find my parents, i know what you want to do.
kramer:(entering monk's, then yelling to the door) hey, i have a lot of thinking to do.
jerry:(sarcastic) i don't know how you got it.
kramer: hey, i have to talk about you. i have a good time to see the other time.
kramer: oh, no--
kramer: yeah! i don't know.
jerry: what?
jerry: well, i was in my house! i mean, if i had a good time for you, i got it. i'm a little nervous about it.
elaine: what do you think?
george: i know.
jerry: oh, i know. i think it's just an odd.
george: what?
jerry: you know.., i don't want to see you again for a second. i can't believe it.
kramer: oh, yeah! yeah, it's a good samaritan trial...
kruger:(sarcastic) yeah?
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 200 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: than hazel?
george: yeah. well, i don't think so.
elaine: you know, i don't even know why. you know what i think of it is? what is this?
jerry: i know. i just remembered. i know what i do.
jerry: you know what i mean, because i was hoping of gay.
kramer:(to the phone) hey, what are ya.
jerry: hi.
elaine: hi.(to george) hey, what did she say?
george: because of course. i mean, i mean you know that i have a little bit.
jerry:(still trying to get a menu) well, you should be a pirate.(to jerry) so you can take a look like idiots, or whatever you want to see the truth.
jerry:(to kramer) i don't want it. i can't believe you got that. you know i think it's not fair.
jerry: oh
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
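For orientation only, a minimal sketch of such lookup tables (one possible approach using `collections.Counter`; the function name here is illustrative, not the required `create_lookup_tables`) might look like:
```
from collections import Counter

def example_lookup_tables(text):
    # order the vocabulary by word frequency, most frequent first
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab
```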
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
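As an illustration (the exact token strings are a free choice, as long as they cannot be confused with real words), such a dictionary could look like:
```
def example_token_lookup():
    # map each punctuation symbol to an unambiguous placeholder token
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '-': '||Dash||',
        '\n': '||Return||'
    }
```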
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to a file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
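To make the sliding-window idea concrete, a minimal sketch (illustrative only; the helper name is a placeholder, not the required `batch_data`) could be:
```
import torch
from torch.utils.data import TensorDataset, DataLoader

def example_batch_data(words, sequence_length, batch_size):
    words = list(words)
    features, targets = [], []
    # each window of sequence_length word ids is a feature;
    # the word id immediately after the window is its target
    for idx in range(len(words) - sequence_length):
        features.append(words[idx:idx + sequence_length])
        targets.append(words[idx + sequence_length])
    data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
    return DataLoader(data, shuffle=True, batch_size=batch_size)
```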
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # next() works across PyTorch versions, unlike data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
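The overall shape of one training step is sketched below (an illustrative outline that assumes an LSTM-style hidden-state tuple and the `train_on_gpu` flag defined earlier in this notebook; it is not the only valid implementation):
```
def example_forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow across batches
    hidden = tuple(h.data for h in hidden)
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    # optional: clip gradients to reduce the risk of exploding gradients in RNNs
    torch.nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    return loss.item(), hidden
```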
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dict_token = {'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||',
}
return dict_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to a file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
from torch import Tensor
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature, target = [], []
for idx in range(len(words) - sequence_length):  # include the last full (sequence, target) pair
feature.append(words[idx:(idx + sequence_length)])
target.append(words[idx+sequence_length])
feature = torch.LongTensor(feature)
target = torch.LongTensor(target)
data = TensorDataset(feature, target)
data_loader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # next() works across PyTorch versions, unlike data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 1, 2, 3, 4, 5],
[ 26, 27, 28, 29, 30],
[ 0, 1, 2, 3, 4],
[ 37, 38, 39, 40, 41],
[ 18, 19, 20, 21, 22],
[ 27, 28, 29, 30, 31],
[ 32, 33, 34, 35, 36],
[ 16, 17, 18, 19, 20],
[ 39, 40, 41, 42, 43],
[ 14, 15, 16, 17, 18]])
torch.Size([10])
tensor([ 6, 31, 5, 42, 23, 32, 37, 21, 44, 19])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(0.5)
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
out = lstm_out.contiguous().view(-1, self.hidden_dim)
#out = self.dropout(lstm_out)
out = self.fc(out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if (train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)
#print(inp.size(),output.squeeze().size(), target.long().size())
loss = criterion(output.squeeze(), target.long())
loss.backward()  # the hidden state was detached above, so the graph does not need to be retained
optimizer.step()
loss = loss.item()
# return the loss over a batch and the hidden state produced by our model
return loss, h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
#print(n_batches, batch_i)
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)+1
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 128
# Number of RNN Layers
n_layers = 1
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
from workspace_utils import active_session
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
"num_layers={}".format(dropout, num_layers))
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I chose 10 for the sequence length, since each script line (sentence) is not very long. I then tested the following configurations, each with a learning rate of 0.001 and a batch size of 100: n_layers: 1, hidden_dim: 128, embedding_dim: 256, loss over the first 3 epochs: (4.02, 3.82, 3.64); n_layers: 2, hidden_dim: 128, embedding_dim: 256, loss: (4.23, 4.03, 3.92); n_layers: 2, hidden_dim: 256, embedding_dim: 256, loss: (4.18, 3.89, 3.79). I chose the first configuration, since it reached the lowest loss with the smallest model. Finally, after training for 10 epochs, the loss was 3.13 (best observed value 2.96), which meets the target of less than 3.5. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
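As an aside, the top-k sampling step inside `generate` can be tried in isolation. The snippet below is a minimal, self-contained sketch with made-up word scores (not model output): it keeps only the k highest-scoring words, renormalizes their probabilities, and samples one index at random.
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up word scores for a single sequence (shape: 1 x vocab_size)
scores = torch.tensor([[2.0, 0.5, 1.2, 3.1, 0.1]])
p = F.softmax(scores, dim=1).data                 # turn scores into probabilities
p, top_i = p.topk(3)                              # keep the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())   # sample one, weighted by probability
print(word_i)                                     # usually 3 (the highest-scoring index)
```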
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
import sys
try:
import torch
except:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
! curl -s https://codeload.github.com/udacity/deep-learning-v2-pytorch/tar.gz/master | tar -xz --strip=2 deep-learning-v2-pytorch-master/project-tv-script-generation/data >/dev/null 2>&1
! wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/project-tv-script-generation/helper.py >/dev/null 2>&1
! wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/project-tv-script-generation/problem_unittests.py >/dev/null 2>&1
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
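If it helps to see the idea concretely before filling in the TODO below, here is a rough sketch of one common approach on a toy word list; the function name `create_lookup_tables_sketch` is just for illustration.
```
from collections import Counter

def create_lookup_tables_sketch(text):
    word_counts = Counter(text)                   # count how often each word appears
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables_sketch(['the', 'cat', 'the', 'dog'])
print(v2i['the'], i2v[0])   # 'the' is the most frequent word, so it gets id 0
```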
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks such as periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
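For intuition, here is a tiny hypothetical example of how such a dictionary gets applied during pre-processing; the two tokens shown are placeholders, not necessarily the values you will choose.
```
# hypothetical two-entry token dict, applied the same way the full one will be
token_dict_example = {'!': '||exclamation_mark||', '.': '||period||'}

line = 'bye! see you later.'
for symbol, token in token_dict_example.items():
    line = line.replace(symbol, ' {} '.format(token))
print(line.split())
# ['bye', '||exclamation_mark||', 'see', 'you', 'later', '||period||']
```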
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
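As a starting point, one possible way to build the (sequence, next-word) pairs is sketched below; the name `batch_data_sketch` and the `shuffle=True` choice are assumptions for illustration, not the only valid design.
```
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    words = list(words)
    features, targets = [], []
    for i in range(len(words) - sequence_length):
        features.append(words[i:i + sequence_length])   # e.g. [1, 2, 3, 4]
        targets.append(words[i + sequence_length])      # the next word, e.g. 5
    data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```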
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
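The two hints above can be checked on their own. The sketch below uses made-up dimensions (a random tensor stands in for the LSTM output) to show how the stacking and the final `out[:, -1]` slice fit together.
```
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, output_size = 4, 5, 8, 20
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)   # stand-in for the LSTM output
fc = nn.Linear(hidden_dim, output_size)

out = lstm_output.contiguous().view(-1, hidden_dim)   # stack to (batch*seq, hidden_dim)
out = fc(out)                                         # word scores for every time step
out = out.view(batch_size, -1, output_size)           # back to (batch, seq, output_size)
out = out[:, -1]                                      # keep only the last time step
print(out.shape)                                      # torch.Size([4, 20])
```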
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
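A rough sketch of the training-step pattern this describes is below. It assumes the hidden state is an LSTM-style `(h, c)` tuple and that `train_on_gpu` from the earlier cell is in scope; detaching the hidden state each step is one common way to keep the graph from growing across batches.
```
def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:                            # move the batch to the GPU when one is available
        inp, target = inp.cuda(), target.cuda()
    hidden = tuple(h.data for h in hidden)      # detach the hidden state from the previous graph
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    return loss.item(), hidden                  # batch loss and the latest hidden state
```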
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
import re
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocabulary = Counter(text)
int_to_vocab = {i : word for i, word in enumerate(sorted(vocabulary, key=vocabulary.get, reverse=True))}
vocab_to_int = {word : i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks such as periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.': '||period||',
',': '||comma||',
'"': '||quotation||',
';': '||semicolon||',
'!': '||exclamation||',
'?': '||question||',
'(': '||left_parenthesis||',
')': '||right_parenthesis||',
'-': '||dash||',
'\n': '||newline||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
num_batches = (len(words) - sequence_length) // batch_size
features = []
targets = []
for i in range(num_batches * batch_size):
features.append(words[i:i+sequence_length])
targets.append(words[i+sequence_length])
dataset = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
# return a dataloader
return DataLoader(dataset, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
print([word for word in range(15)])
for data in batch_data(range(15), 3, 4):
print(data)
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
[tensor([[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[3, 4, 5]]), tensor([3, 4, 5, 6])]
[tensor([[4, 5, 6],
[5, 6, 7],
[6, 7, 8],
[7, 8, 9]]), tensor([ 7, 8, 9, 10])]
[tensor([[ 8, 9, 10],
[ 9, 10, 11],
[10, 11, 12],
[11, 12, 13]]), tensor([11, 12, 13, 14])]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.hidden_dim = hidden_dim
self.num_layers = n_layers
self.output_size = output_size
# define model layers
self.embed_input = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.shape[0]
x = self.embed_input(nn_input)
x, hidden = self.lstm(x, hidden)
x = x.contiguous().view(-1, self.hidden_dim)
# x = self.dropout(x)
x = self.fc(x)
x = x.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return x[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
new_weights = []
# Implement function
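# walk the LSTM's weight_ih_l{k} parameters to create one zero tensor per layer;
# with n_layers == 2 this yields exactly two tensors, which then serve as the (h, c) hidden-state tuple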
for name, parameter in self.named_parameters():
if name.startswith('lstm.weight_ih'):
new_weight = parameter.data.new(self.num_layers, batch_size, self.hidden_dim).zero_()
if train_on_gpu:
new_weight = new_weight.cuda()
new_weights += [new_weight]
# initialize hidden state with zero weights, and move to GPU if available
return tuple(new_weights)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
rnn.zero_grad()
hidden = tuple([each.data for each in hidden])
output, hidden_out = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output, target)
loss.backward()
# nn.utils.clip_grad_norm_(rnn.parameters(), 1)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden_out
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 30
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 60
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 60 epoch(s)...
Epoch: 1/60 Loss: 5.584539404392243
Epoch: 1/60 Loss: 4.978618786811829
Epoch: 1/60 Loss: 4.743197936058045
Epoch: 1/60 Loss: 4.7408830742836
Epoch: 1/60 Loss: 4.796560604572296
Epoch: 1/60 Loss: 4.691383345127106
Epoch: 1/60 Loss: 4.620989734649658
Epoch: 1/60 Loss: 4.5035069065094
Epoch: 1/60 Loss: 4.2975137164592745
Epoch: 1/60 Loss: 4.580089133262634
Epoch: 1/60 Loss: 4.363871543884278
Epoch: 1/60 Loss: 4.470418291568756
Epoch: 1/60 Loss: 4.186340519428253
Epoch: 1/60 Loss: 4.282206547737122
Epoch: 1/60 Loss: 4.192819122314453
Epoch: 1/60 Loss: 4.33475363445282
Epoch: 1/60 Loss: 4.294622162818909
Epoch: 1/60 Loss: 4.451160747051239
Epoch: 1/60 Loss: 4.440550621509552
Epoch: 1/60 Loss: 4.313336195468903
Epoch: 1/60 Loss: 4.309487774848938
Epoch: 1/60 Loss: 4.205012201547623
Epoch: 1/60 Loss: 4.502835605621338
Epoch: 1/60 Loss: 4.315154174804688
Epoch: 1/60 Loss: 4.477178359031678
Epoch: 1/60 Loss: 4.474082582473755
Epoch: 1/60 Loss: 4.332868708610535
Epoch: 1/60 Loss: 4.454158801078797
Epoch: 1/60 Loss: 4.333987774848938
Epoch: 1/60 Loss: 4.245570753097534
Epoch: 1/60 Loss: 4.191190000057221
Epoch: 1/60 Loss: 4.500000421524048
Epoch: 1/60 Loss: 4.312783335208893
Epoch: 1/60 Loss: 4.0926206276416774
Epoch: 1/60 Loss: 4.1326706950664525
Epoch: 1/60 Loss: 4.312404307842255
Epoch: 1/60 Loss: 4.12882629776001
Epoch: 1/60 Loss: 4.399092391729355
Epoch: 1/60 Loss: 4.264918925523758
Epoch: 1/60 Loss: 4.217979504585266
Epoch: 1/60 Loss: 4.194918124675751
Epoch: 1/60 Loss: 4.245503536701202
Epoch: 1/60 Loss: 4.207558061599731
Epoch: 1/60 Loss: 4.386960781574249
Epoch: 1/60 Loss: 4.240934824943542
Epoch: 1/60 Loss: 4.349321382522583
Epoch: 1/60 Loss: 4.556552526950836
Epoch: 1/60 Loss: 4.565672998428345
Epoch: 1/60 Loss: 4.419836859226227
Epoch: 1/60 Loss: 4.430744566679001
Epoch: 1/60 Loss: 4.491157646656037
Epoch: 1/60 Loss: 4.461477841854095
Epoch: 1/60 Loss: 4.483451738595963
Epoch: 1/60 Loss: 4.417303246498108
Epoch: 1/60 Loss: 4.428138305187225
Epoch: 1/60 Loss: 4.4382841343879695
Epoch: 1/60 Loss: 4.290555070877075
Epoch: 1/60 Loss: 4.510940369129181
Epoch: 1/60 Loss: 4.262174345254898
Epoch: 2/60 Loss: 4.429508287945519
Epoch: 2/60 Loss: 4.0317278101444245
Epoch: 2/60 Loss: 4.030819558143616
Epoch: 2/60 Loss: 4.06558516073227
Epoch: 2/60 Loss: 4.132909673452377
Epoch: 2/60 Loss: 4.153683086156845
Epoch: 2/60 Loss: 4.111942491531372
Epoch: 2/60 Loss: 4.052715909957886
Epoch: 2/60 Loss: 3.874140060424805
Epoch: 2/60 Loss: 4.1939811913967135
Epoch: 2/60 Loss: 3.990584184885025
Epoch: 2/60 Loss: 4.126995318889618
Epoch: 2/60 Loss: 3.840491840362549
Epoch: 2/60 Loss: 3.9559146318435667
Epoch: 2/60 Loss: 3.8551442093849184
Epoch: 2/60 Loss: 4.006331790208817
Epoch: 2/60 Loss: 4.020254987716675
Epoch: 2/60 Loss: 4.228961871147156
Epoch: 2/60 Loss: 4.141343107700348
Epoch: 2/60 Loss: 4.0554907076358795
Epoch: 2/60 Loss: 4.055373066902161
Epoch: 2/60 Loss: 3.97321329498291
Epoch: 2/60 Loss: 4.298759968757629
Epoch: 2/60 Loss: 4.109160690307617
Epoch: 2/60 Loss: 4.242182459831238
Epoch: 2/60 Loss: 4.264491364717483
Epoch: 2/60 Loss: 4.132045463562012
Epoch: 2/60 Loss: 4.248043483734131
Epoch: 2/60 Loss: 4.1306860065460205
Epoch: 2/60 Loss: 4.060737189769745
Epoch: 2/60 Loss: 3.9417130317687987
Epoch: 2/60 Loss: 4.29304283285141
Epoch: 2/60 Loss: 4.137231809854508
Epoch: 2/60 Loss: 3.9067204978466035
Epoch: 2/60 Loss: 3.956833998441696
Epoch: 2/60 Loss: 4.113496272802353
Epoch: 2/60 Loss: 3.972088612794876
Epoch: 2/60 Loss: 4.238672315597534
Epoch: 2/60 Loss: 4.08638326048851
Epoch: 2/60 Loss: 4.041441945791244
Epoch: 2/60 Loss: 4.041833787918091
Epoch: 2/60 Loss: 4.058106949567795
Epoch: 2/60 Loss: 4.092928634643554
Epoch: 2/60 Loss: 4.200017159461975
Epoch: 2/60 Loss: 4.119115034103394
Epoch: 2/60 Loss: 4.187801851272583
Epoch: 2/60 Loss: 4.41037546133995
Epoch: 2/60 Loss: 4.3907617893219
Epoch: 2/60 Loss: 4.24628427362442
Epoch: 2/60 Loss: 4.246769404649735
Epoch: 2/60 Loss: 4.2671496539115905
Epoch: 2/60 Loss: 4.289663331031799
Epoch: 2/60 Loss: 4.297907082319259
Epoch: 2/60 Loss: 4.276141669273376
Epoch: 2/60 Loss: 4.198048438787461
Epoch: 2/60 Loss: 4.266749669075012
Epoch: 2/60 Loss: 4.121604875087738
Epoch: 2/60 Loss: 4.362252393245697
Epoch: 2/60 Loss: 4.079461237668991
Epoch: 3/60 Loss: 4.250750797922197
Epoch: 3/60 Loss: 3.9251162407398223
Epoch: 3/60 Loss: 3.914248083591461
Epoch: 3/60 Loss: 3.94316263628006
Epoch: 3/60 Loss: 4.029306303739547
Epoch: 3/60 Loss: 4.083336960792542
Epoch: 3/60 Loss: 4.0299128365516665
Epoch: 3/60 Loss: 4.042876502275467
Epoch: 3/60 Loss: 3.7869373910427093
Epoch: 3/60 Loss: 4.123842885255813
Epoch: 3/60 Loss: 3.9306966569423674
Epoch: 3/60 Loss: 4.0140827729702
Epoch: 3/60 Loss: 3.746992486476898
Epoch: 3/60 Loss: 3.8321164903640748
Epoch: 3/60 Loss: 3.746077346324921
Epoch: 3/60 Loss: 3.8679626944065095
Epoch: 3/60 Loss: 3.910302846431732
Epoch: 3/60 Loss: 4.1163127307891845
Epoch: 3/60 Loss: 4.018224453449249
Epoch: 3/60 Loss: 3.9532341599464416
Epoch: 3/60 Loss: 3.909925265073776
Epoch: 3/60 Loss: 3.8639591455459597
Epoch: 3/60 Loss: 4.18081232213974
Epoch: 3/60 Loss: 3.945679934024811
Epoch: 3/60 Loss: 4.076323775529861
Epoch: 3/60 Loss: 4.16047481584549
Epoch: 3/60 Loss: 4.019626809597016
Epoch: 3/60 Loss: 4.157767161846161
Epoch: 3/60 Loss: 4.025215100288391
Epoch: 3/60 Loss: 3.989033597946167
Epoch: 3/60 Loss: 3.8601762781143187
Epoch: 3/60 Loss: 4.183839636325836
Epoch: 3/60 Loss: 4.022508750200272
Epoch: 3/60 Loss: 3.8207516753673554
Epoch: 3/60 Loss: 3.8664631218910217
Epoch: 3/60 Loss: 3.992371515750885
Epoch: 3/60 Loss: 3.8129169692993163
Epoch: 3/60 Loss: 4.052038418769836
Epoch: 3/60 Loss: 3.994664577484131
Epoch: 3/60 Loss: 3.8779393215179443
Epoch: 3/60 Loss: 3.9145858850479125
Epoch: 3/60 Loss: 3.9229103367328646
Epoch: 3/60 Loss: 3.955868262529373
Epoch: 3/60 Loss: 4.128921581983566
Epoch: 3/60 Loss: 4.026408871173858
Epoch: 3/60 Loss: 4.058253768444061
Epoch: 3/60 Loss: 4.234986638069153
Epoch: 3/60 Loss: 4.196707132339477
Epoch: 3/60 Loss: 4.074207115650177
Epoch: 3/60 Loss: 4.104648168563843
Epoch: 3/60 Loss: 4.12827761554718
Epoch: 3/60 Loss: 4.132703597545624
Epoch: 3/60 Loss: 4.090524292230606
Epoch: 3/60 Loss: 4.1452890601158146
Epoch: 3/60 Loss: 4.129444852590561
Epoch: 3/60 Loss: 4.151323220014572
Epoch: 3/60 Loss: 3.975802195549011
Epoch: 3/60 Loss: 4.234117865085602
Epoch: 3/60 Loss: 3.9730878348350527
Epoch: 4/60 Loss: 4.184733656923408
Epoch: 4/60 Loss: 3.914286875963211
Epoch: 4/60 Loss: 3.865648806810379
Epoch: 4/60 Loss: 3.9025143425464632
Epoch: 4/60 Loss: 3.9858370382785795
Epoch: 4/60 Loss: 4.020926055908203
Epoch: 4/60 Loss: 3.918301112651825
Epoch: 4/60 Loss: 3.9387994062900544
Epoch: 4/60 Loss: 3.6834995160102846
Epoch: 4/60 Loss: 4.005870884895325
Epoch: 4/60 Loss: 3.880285642147064
Epoch: 4/60 Loss: 3.940782643079758
Epoch: 4/60 Loss: 3.660971079349518
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** The biggest change happened when I removed the dropout that I originally had. That took the model from never going below a loss of 4.0 to being able to converge. The reasoning for not using dropout is that overfitting is not a concern here; we are actually trying to recreate Seinfeld scripts as faithfully as possible. Once the dropout was removed the model was able to converge, and I left the other parameters as they were at that moment (a rough way to compare such variants is sketched below). --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
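Referring to the dropout comparison described in the answer above: the following is a minimal, illustrative sketch of how the two configurations could be compared by training each for a single epoch, reusing the `RNN` class and `train_rnn` loop defined earlier in this notebook. Whether the author removed a separate dropout layer or simply lowered the `dropout` constructor argument is not shown here, so the values below are placeholders, not the exact change that was made.
```
# illustrative only: compare training with and without dropout for one epoch each
for drop in [0.5, 0.0]:
    model = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=drop)
    if train_on_gpu:
        model.cuda()
    opt = torch.optim.Adam(model.parameters(), lr=learning_rate)
    crit = nn.CrossEntropyLoss()
    print('--- dropout = {} ---'.format(drop))
    train_rnn(model, batch_size, opt, crit, n_epochs=1, show_every_n_batches=show_every_n_batches)
```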
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
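Before looking at the full `generate` function below, here is a minimal, self-contained sketch of the top-k sampling step it performs, using made-up scores for a tiny six-word vocabulary (the numbers are purely illustrative and not part of the original notebook):
```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.2, 1.5, 0.1, 2.3, 0.7, 1.1]])  # fake word scores for one sequence
p = F.softmax(scores, dim=1).data                         # turn scores into probabilities
p, top_i = p.topk(3)                                      # keep only the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())             # sample one of them at random
print(top_i, word_i)                                      # e.g. [3 1 5] 1
```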
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:37: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
text[:50]
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    vocab = set(text)
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for i, word in enumerate(vocab)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids. Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc = {'.':'||Period||',
',':'||Comma||',
'"':'||Quatation_mark||',
';':'||Semicolon||',
'!':'||Exclamation_mark||',
'?':'||Question_mark||',
'(':'||Left_parentheses||',
')':'||Right_parentheses||',
'-':'||Dash||',
'\n':'||Return||'
}
return punc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
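Since the helper function itself is not shown in this notebook, here is a rough, hypothetical sketch of the kind of steps it performs, inferred from how its outputs (`int_text`, `vocab_to_int`, `int_to_vocab`, `token_dict`) and `helper.SPECIAL_WORDS['PADDING']` are used later. The function name `preprocess_and_save_sketch`, the `'<PAD>'` padding word, and the `preprocess.p` output path are assumptions for illustration, not the actual `helper.py` code:
```
import pickle

def preprocess_and_save_sketch(data_path, token_lookup, create_lookup_tables,
                               pad_word='<PAD>', out_path='preprocess.p'):
    with open(data_path, 'r') as f:
        text = f.read()
    # surround each punctuation symbol with spaces so it becomes its own token
    token_dict = token_lookup()
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    words = text.lower().split()
    # build the lookup tables, including a padding word used later by generate()
    vocab_to_int, int_to_vocab = create_lookup_tables(words + [pad_word])
    int_text = [vocab_to_int[word] for word in words]
    pickle.dump((int_text, vocab_to_int, int_to_vocab, token_dict), open(out_path, 'wb'))
```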
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
import torch
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
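# NOTE: batch_data is defined twice in this cell; the second definition further down
# (the one with the docstring) overrides this first draft and is the version actually used.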
def batch_data(words, sequence_length, batch_size):
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# print("feature: ",x_batch)
batch_y = words[idx_end]
# print("target: ", batch_y)
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure the SHUFFLE your training data
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
#print(x)
#print(y)
return data_loader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
#words = torch.tensor(words)
word_len = len(words)// batch_size
words = words[:word_len*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(y_len):
idx_max = sequence_length + idx
x_data = words[idx:idx_max]
y_data = words[idx_max]
x.append(x_data)
y.append(y_data)
data = TensorDataset(torch.tensor(x), torch.tensor(y))
data_loader = torch.utils.data.DataLoader(data, shuffle = False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.vocab_size= vocab_size
self.output_size= output_size
self.embedding_dim= embedding_dim
self.hidden_dim= hidden_dim
self.n_layers= n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.gru = nn.LSTM(embedding_dim, hidden_size = hidden_dim ,num_layers= n_layers, batch_first=True, dropout = dropout)
self.fc = nn.Linear(hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# nn_input = nn_input.to(torch.int64)
# print(nn_input.shape)
batch_size = nn_input.size(0)
embeddings = self.embed(nn_input)
# print(embeddings.shape)
lstm_out, hidden = self.gru(embeddings, hidden)
# print(lstm_out.shape)
# print(hidden[0].shape)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#out = self.dropout(lstm_out)
# print(out.shape)
out = self.fc(lstm_out)
out = self.fc2(out)
out = out.view(batch_size, -1, self.output_size)
# print(out[:, -1].shape)
# return one batch of output word scores and the hidden state
return out[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
#print(weight.shape)
#print(weight)
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda()
,weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
,weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
# initialize hidden state with zero weights, and move to GPU if available
return hidden
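# Illustrative shape check (not part of the original notebook): for a batch of 4
# sequences of length 5 over a toy 20-word vocabulary, forward() should return one row
# of word scores per sequence, i.e. a (batch_size, output_size) tensor, plus the (h, c)
# hidden-state tuple with shapes (n_layers, batch_size, hidden_dim).
toy_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
toy_inp = torch.randint(0, 20, (4, 5))
if train_on_gpu:
    toy_rnn, toy_inp = toy_rnn.cuda(), toy_inp.cuda()
toy_out, toy_hidden = toy_rnn(toy_inp, toy_rnn.init_hidden(4))
print(toy_out.shape)        # expected: torch.Size([4, 20])
print(toy_hidden[0].shape)  # expected: torch.Size([2, 4, 16])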
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
    # detach the hidden state from its history so we don't backprop through the entire training history
    h = tuple([each.data for each in hidden])
    rnn.zero_grad()
    output, h = rnn(inp, h)
    loss = criterion(output, target)
    # perform backpropagation and optimization
    loss.backward()
    # clip gradients after backward() so the freshly computed gradients are the ones being clipped
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    # return the loss over a batch and the latest hidden state produced by the model
    return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown every set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 12 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.002
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int.keys())
# Output size
output_size = len(set(int_text))+1
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1000
len(set(int_text))+1
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
%%time
from workspace_utils import active_session
with active_session():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
# """
# DON'T MODIFY ANYTHING IN THIS CELL
# """
# # create model and move to gpu if available
# rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
# if train_on_gpu:
# rnn.cuda()
# # defining loss and optimization functions for training
# optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
# criterion = nn.CrossEntropyLoss()
# # training the model
# trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# # saving the trained model
# helper.save_model('./save/trained_rnn', trained_rnn)
# print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Below are the reasons for the chosen hyperparameters (a quick way to compare candidate sequence lengths is sketched below). - Sequence_length - I tried smaller sequence_lengths for an epoch and found that smaller sequence lengths train faster per epoch; however, they did not converge faster. - Batch size - The lower the batch size, the longer the training takes, so I increased the batch size to a reasonable level; it also depends on the capacity of the server. - hidden_dim - A larger hidden dimension means more LSTM units. I settled on 256 because it should not run into vanishing-gradient issues. - Number of layers - I selected 3; the more layers, the deeper the network, and it might then have vanishing-gradient problems. - Learning rate - I started with 0.01 and the loss did not decrease much, so I settled on 0.002. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
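As mentioned in the answer, here is a rough sketch of the kind of quick sequence-length comparison described above, reusing `batch_data`, `RNN`, and `forward_back_prop` as defined in this notebook; the candidate lengths and the 200-batch budget are illustrative choices, not values the author reports:
```
for seq_len in [8, 12, 16]:                  # illustrative candidates
    loader = batch_data(int_text, seq_len, batch_size)
    model = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    opt = torch.optim.Adam(model.parameters(), lr=learning_rate)
    crit = nn.CrossEntropyLoss()
    hidden = model.init_hidden(batch_size)
    losses = []
    for batch_i, (inputs, labels) in enumerate(loader, 1):
        if batch_i > 200:                    # only peek at the first few hundred batches
            break
        loss, hidden = forward_back_prop(model, opt, crit, inputs, labels, hidden)
        losses.append(loss)
    print('sequence_length={}: avg loss over first {} batches = {:.3f}'.format(
        seq_len, len(losses), np.average(losses)))
```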
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:55: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46366
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {}
int_to_vocab = {}
for word in text:
if word not in vocab_to_int:
word_id = len(vocab_to_int)
vocab_to_int[word] = word_id
int_to_vocab[word_id] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids. Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.': '||Period||',
',': "||Comma||",
'"': "||QuotationMark||",
';': "||Semicolon||",
'!': "||ExclamationMark||",
'?': "||QuestionMark||",
'(': "||LeftParentheses||",
')': "||RightParentheses||",
'-': "||Dash||",
'\n': "||Return||"
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(list(vocab_to_int)[:100])
###Output
['this', 'is', 'out', '||period||', 'and', 'one', 'of', 'the', 'single', 'most', 'enjoyable', 'experiences', 'life', 'people', 'did', 'you', 'ever', 'hear', 'talking', 'about', 'we', 'should', 'go', '||questionmark||', 'what', 'theyre', 'whole', 'thing', '||comma||', 'were', 'all', 'now', 'no', 'home', 'not', 'person', 'here', '||exclamationmark||', 'there', 'are', 'trying', 'to', 'find', 'us', 'they', 'dont', 'know', 'where', '||leftparentheses||', 'on', 'an', 'imaginary', 'phone', '||rightparentheses||', 'ring', 'i', 'cant', 'him', 'he', 'didnt', 'tell', 'me', 'was', 'going', 'must', 'have', 'gone', 'wanna', 'get', 'ready', 'pick', 'clothes', 'right', 'take', 'shower', 'cash', 'your', 'friends', 'car', 'spot', 'reservation', 'then', 'youre', 'standing', 'around', 'do', 'gotta', 'be', 'getting', 'back', 'once', 'sleep', 'up', 'again', 'tomorrow', 'in', 'its', 'my', 'feeling', 'youve']
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_samples = len(words) - sequence_length + 1
features = np.zeros((n_samples, sequence_length))
targets = np.zeros(n_samples)
for i in range(n_samples):
start = i
end = i+sequence_length
features[i, :] = words[start:end]
if end == len(words):
targets[i] = words[0]
else:
targets[i] = words[end]
print(features.shape, targets.shape)
return DataLoader(TensorDataset(torch.from_numpy(features), torch.from_numpy(targets)), shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
loader = batch_data([1,2,3,4,5,6,7], 4, 1)
for i_batch, sample_batched in enumerate(loader):
print(i_batch, sample_batched)
###Output
(4, 4) (4,)
0 [tensor([[1., 2., 3., 4.]], dtype=torch.float64), tensor([5.], dtype=torch.float64)]
1 [tensor([[4., 5., 6., 7.]], dtype=torch.float64), tensor([1.], dtype=torch.float64)]
2 [tensor([[3., 4., 5., 6.]], dtype=torch.float64), tensor([7.], dtype=torch.float64)]
3 [tensor([[2., 3., 4., 5.]], dtype=torch.float64), tensor([6.], dtype=torch.float64)]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
(46, 5) (46,)
torch.Size([10, 5])
tensor([[19., 20., 21., 22., 23.],
[ 4., 5., 6., 7., 8.],
[42., 43., 44., 45., 46.],
[27., 28., 29., 30., 31.],
[ 1., 2., 3., 4., 5.],
[26., 27., 28., 29., 30.],
[12., 13., 14., 15., 16.],
[15., 16., 17., 18., 19.],
[41., 42., 43., 44., 45.],
[ 7., 8., 9., 10., 11.]], dtype=torch.float64)
torch.Size([10])
tensor([24., 9., 47., 32., 6., 31., 17., 20., 46., 12.], dtype=torch.float64)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
def one_hot(v, vocab_size):
r = torch.zeros((v.shape[0], v.shape[1], vocab_size), dtype=torch.float)
for batch in range(v.shape[0]):
for word in range(v.shape[1]):
#print(v[batch,word].int())
r[batch, word, v[batch, word].int()] = 1.
if train_on_gpu:
r = r.cuda()
return r
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
#print(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout)
#print(embedding_dim)
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
#one_hot_input = one_hot(nn_input, self.vocab_size)
#print(nn_input.shape, nn_input.dtype)
embedding = self.embedding(nn_input.long())
#print(nn_input.shape, one_hot_input.shape)
s, h_1 = self.lstm(embedding, hidden)
#print("s.shape", s.shape)
s = s[:, -1, :]
#print("s_.shape", s.shape)
#print(h_1[0].shape, h_1[1].shape)
s = s.contiguous().view(-1, self.hidden_dim)
out = self.fc(s)
#print(out.shape)
# return one batch of output word scores and the hidden state
return out, h_1
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
#print("init_hidden", batch_size)
# initialize hidden state with zero weights, and move to GPU if available
h_0 = torch.zeros((self.n_layers, batch_size, self.hidden_dim), dtype=torch.float)
c_0 = torch.zeros((self.n_layers, batch_size, self.hidden_dim), dtype=torch.float)
if train_on_gpu:
h_0 = h_0.cuda()
c_0 = c_0.cuda()
return (h_0, c_0)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.

**If a GPU is available, you should move your data to that GPU device, here.**
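One detail that is easy to get wrong here: `nn.CrossEntropyLoss` expects raw (unnormalized) word scores of shape `(batch, vocab_size)` and integer class targets of shape `(batch,)`, so the targets do not need to be one-hot encoded. A tiny sketch with made-up sizes, just to illustrate the expected shapes and dtypes:

```
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
scores = torch.randn(8, 100)            # (batch, vocab_size) raw scores from the network
targets = torch.randint(0, 100, (8,))   # (batch,) integer word ids (LongTensor)
loss = criterion(scores, targets)
print(loss.item())                      # average loss over the batch
```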
###Code
def one_hot_target(v, vocab_size):
r = torch.zeros((v.shape[0], vocab_size), dtype=torch.float)
for batch in range(v.shape[0]):
r[batch, v[batch]] = 1.
if train_on_gpu:
r = r.cuda()
return r
def detach_hidden(h):
if isinstance(h, torch.Tensor):
return h.detach()
else:
return tuple(detach_hidden(v) for v in h)
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available (the model output will already live on the same device as rnn)
inp = inp.cuda() if train_on_gpu else inp
target = target.cuda() if train_on_gpu else target
# zero accumulated gradients, then run the forward pass
rnn.zero_grad()
out, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
#target_one_hot = one_hot_target(target, out.shape[1])
#_, out_max = out.max(dim=1)
#print(target.shape, out.shape, out_max.shape, target_one_hot.shape)
#print(type(target), type(out), type(out_max), type(target_one_hot))
#print(target.dtype, out.dtype, out_max.dtype, target_one_hot.dtype)
loss = criterion(out, target.long())  # target was already moved to the GPU above when one is available
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), detach_hidden(hidden)
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
batch_avg_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
print(f'Epoch {epoch_i}')
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
avg = np.average(batch_losses)
print('Epoch: {:>4}/{:<4} Batch: {} Loss: {}\n'.format(
epoch_i, n_epochs, batch_i, avg))
batch_losses = []
batch_avg_losses = avg
torch.cuda.empty_cache()
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_avg_losses)))
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.

If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
print(len(int_text), vocab_size)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 800
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
892114 21387
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch 1
Epoch: 1/10 Batch: 1000 Loss: 5.853621798992157
Epoch: 1/10 Batch: 2000 Loss: 5.244606633424759
Epoch: 1/10 Batch: 3000 Loss: 4.803017456054688
Epoch: 1/10 Batch: 4000 Loss: 4.637760216712952
Epoch: 1/10 Batch: 5000 Loss: 4.531850752353669
Epoch: 1/10 Batch: 6000 Loss: 4.470599411249161
Epoch: 1/10 Batch: 7000 Loss: 4.41624275636673
Epoch: 1/10 Batch: 8000 Loss: 4.38442890548706
Epoch: 1/10 Loss: 4.38442890548706
Epoch 2
Epoch: 2/10 Batch: 1000 Loss: 4.242424337768356
Epoch: 2/10 Batch: 2000 Loss: 4.149412917852402
Epoch: 2/10 Batch: 3000 Loss: 4.138409474134445
Epoch: 2/10 Batch: 4000 Loss: 4.113071617841721
Epoch: 2/10 Batch: 5000 Loss: 4.102521020889283
Epoch: 2/10 Batch: 6000 Loss: 4.0857564868927
Epoch: 2/10 Batch: 7000 Loss: 4.048622624874115
Epoch: 2/10 Batch: 8000 Loss: 4.047781325817108
Epoch: 2/10 Loss: 4.047781325817108
Epoch 3
Epoch: 3/10 Batch: 1000 Loss: 3.969946078370475
Epoch: 3/10 Batch: 2000 Loss: 3.86342317032814
Epoch: 3/10 Batch: 3000 Loss: 3.85399765753746
Epoch: 3/10 Batch: 4000 Loss: 3.873039680480957
Epoch: 3/10 Batch: 5000 Loss: 3.8547728328704833
Epoch: 3/10 Batch: 6000 Loss: 3.8629228987693787
Epoch: 3/10 Batch: 7000 Loss: 3.859683215856552
Epoch: 3/10 Batch: 8000 Loss: 3.8698759377002716
Epoch: 3/10 Loss: 3.8698759377002716
Epoch 4
Epoch: 4/10 Batch: 1000 Loss: 3.7800804308961293
Epoch: 4/10 Batch: 2000 Loss: 3.681982980489731
Epoch: 4/10 Batch: 3000 Loss: 3.690797383785248
Epoch: 4/10 Batch: 4000 Loss: 3.7203687834739685
Epoch: 4/10 Batch: 5000 Loss: 3.718963539123535
Epoch: 4/10 Batch: 6000 Loss: 3.7331083700656893
Epoch: 4/10 Batch: 7000 Loss: 3.73202507686615
Epoch: 4/10 Batch: 8000 Loss: 3.751604646921158
Epoch: 4/10 Loss: 3.751604646921158
Epoch 5
Epoch: 5/10 Batch: 1000 Loss: 3.6497513016706207
Epoch: 5/10 Batch: 2000 Loss: 3.5666191935539246
Epoch: 5/10 Batch: 3000 Loss: 3.5738594584465027
Epoch: 5/10 Batch: 4000 Loss: 3.596497260093689
Epoch: 5/10 Batch: 5000 Loss: 3.6187335658073425
Epoch: 5/10 Batch: 6000 Loss: 3.602631780385971
Epoch: 5/10 Batch: 7000 Loss: 3.6169976077079773
Epoch: 5/10 Batch: 8000 Loss: 3.617536381483078
Epoch: 5/10 Loss: 3.617536381483078
Epoch 6
Epoch: 6/10 Batch: 1000 Loss: 3.534330853318746
Epoch: 6/10 Batch: 2000 Loss: 3.4614045357704164
Epoch: 6/10 Batch: 3000 Loss: 3.4708561041355135
Epoch: 6/10 Batch: 4000 Loss: 3.4790310962200164
Epoch: 6/10 Batch: 5000 Loss: 3.503310553073883
Epoch: 6/10 Batch: 6000 Loss: 3.504567188501358
Epoch: 6/10 Batch: 7000 Loss: 3.528929073572159
Epoch: 6/10 Batch: 8000 Loss: 3.536658666372299
Epoch: 6/10 Loss: 3.536658666372299
Epoch 7
Epoch: 7/10 Batch: 1000 Loss: 3.457180223137311
Epoch: 7/10 Batch: 2000 Loss: 3.3559438667297363
Epoch: 7/10 Batch: 3000 Loss: 3.3687479209899904
Epoch: 7/10 Batch: 4000 Loss: 3.388079210996628
Epoch: 7/10 Batch: 5000 Loss: 3.424742641210556
Epoch: 7/10 Batch: 6000 Loss: 3.421554022073746
Epoch: 7/10 Batch: 7000 Loss: 3.4435657577514647
Epoch: 7/10 Batch: 8000 Loss: 3.453924249649048
Epoch: 7/10 Loss: 3.453924249649048
Epoch 8
Epoch: 8/10 Batch: 1000 Loss: 3.364710630941118
Epoch: 8/10 Batch: 2000 Loss: 3.269726131916046
Epoch: 8/10 Batch: 3000 Loss: 3.302654886007309
Epoch: 8/10 Batch: 4000 Loss: 3.3127076342105863
Epoch: 8/10 Batch: 5000 Loss: 3.332404126405716
Epoch: 8/10 Batch: 6000 Loss: 3.3291565313339233
Epoch: 8/10 Batch: 7000 Loss: 3.368938497543335
Epoch: 8/10 Batch: 8000 Loss: 3.377081280231476
Epoch: 8/10 Loss: 3.377081280231476
Epoch 9
Epoch: 9/10 Batch: 1000 Loss: 3.2800091848219513
Epoch: 9/10 Batch: 2000 Loss: 3.215510542869568
Epoch: 9/10 Batch: 3000 Loss: 3.2248386573791503
Epoch: 9/10 Batch: 4000 Loss: 3.2357929401397705
Epoch: 9/10 Batch: 5000 Loss: 3.257990814447403
Epoch: 9/10 Batch: 6000 Loss: 3.270929653644562
Epoch: 9/10 Batch: 7000 Loss: 3.2904696140289307
Epoch: 9/10 Batch: 8000 Loss: 3.3084900839328766
Epoch: 9/10 Loss: 3.3084900839328766
Epoch 10
Epoch: 10/10 Batch: 1000 Loss: 3.2151777885780555
Epoch: 10/10 Batch: 2000 Loss: 3.133549928188324
Epoch: 10/10 Batch: 3000 Loss: 3.155182463645935
Epoch: 10/10 Batch: 4000 Loss: 3.153361447095871
Epoch: 10/10 Batch: 5000 Loss: 3.1879862921237945
Epoch: 10/10 Batch: 6000 Loss: 3.220042049884796
Epoch: 10/10 Batch: 7000 Loss: 3.2258312377929688
Epoch: 10/10 Batch: 8000 Loss: 3.2454964752197264
Epoch: 10/10 Loss: 3.2454964752197264
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:**
My strategy for finding the hyperparameters was, as always, trial and error. I tried more than 10 settings because I could not reach a good loss, but it turned out that the real problem was not the hyperparameters at all: it was a dropout layer placed after the fully-connected layer. Once I removed that dropout layer, the loss improved considerably.

I first used learning_rate=0.01, but the loss increased at that learning rate, so I reduced it to 0.001; after that the loss decreased steadily.

I chose the embedding_dim based on the advice in the word2vec lecture. I initially tried much larger embedding dimensions, around 1000, but after reviewing the word-embedding material I realized a larger embedding_dim is not needed for this project.

Regarding model size, I started with much smaller models, e.g. hidden_dim=200 and n_layers=2. I could not reach a good loss with such a small model, so I enlarged it, but I kept n_layers at 3 or fewer based on the advice in the "Hyperparameters" lecture (Number of Hidden Units/Layers and RNN Hyperparameters sections).

The last question was how to set the sequence_length. I looked at the average number of words per line, which is about 5. I wanted the model to see at least the last two to three sentences, so I chose 15.

--- Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
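As a standalone illustration of the top-k step, the sketch below applies it to a dummy score vector; this mirrors what the provided `generate` function does internally, but with invented numbers so it can be run on its own:

```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.randn(1, 50)                        # pretend word scores for a vocabulary of 50
p = F.softmax(scores, dim=1)
p, top_i = p.topk(5)                               # keep only the 5 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
next_word_id = np.random.choice(top_i, p=p / p.sum())  # sample among them, weighted by probability
print(next_word_id)
```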
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
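Whatever id scheme you pick, the two dictionaries should invert each other. A quick, hypothetical sanity check (not part of the graded tests) that you could run once the function below is implemented:

```
# hypothetical round-trip check of the lookup tables built by create_lookup_tables (defined below)
vocab_to_int, int_to_vocab = create_lookup_tables(['jerry', 'hello', 'jerry', 'elaine'])
assert all(int_to_vocab[vocab_to_int[w]] == w for w in ['jerry', 'hello', 'elaine'])
```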
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# gather the unique words (despite the original variable name, these are words, not characters)
words = tuple(set(text))
# assign each unique word an integer id (starting from 1) and build both directions of the mapping
int_to_vocab = dict(enumerate(words, 1))
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
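To see why this helps, here is a small, illustrative sketch of how the token dictionary gets applied during pre-processing (the real work happens inside `helper.preprocess_and_save_data`; this just mimics the replacement step):

```
sample = 'bye! bye.'
token_dict = token_lookup()   # the function implemented in the next cell
for symbol, token in token_dict.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['bye', '||Exclamation_Mark||', 'bye', '||Period||'] -> "bye" now always maps to a single id
```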
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark}}',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_Mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'-' : '||Dash||',
'\n': '||Return||'
}
return token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
import torch
torch.backends.cudnn.enabled=False
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.

You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```

Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.

>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.

For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5]  # features
6             # target
```
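The implementation below builds these windows with a plain Python loop, which is perfectly fine. As an alternative design, the same feature/target windows can be built in one shot with NumPy indexing; a sketch, assuming `words` is a list or array of integer word ids:

```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_vectorized(words, sequence_length, batch_size):
    words = np.asarray(words)
    n = len(words) - sequence_length
    idx = np.arange(sequence_length)[None, :] + np.arange(n)[:, None]  # (n, sequence_length) window indices
    features = torch.from_numpy(words[idx])               # each row is one input window
    targets = torch.from_numpy(words[sequence_length:])   # the word immediately after each window
    return DataLoader(TensorDataset(features, targets), shuffle=True, batch_size=batch_size)
```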
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
len_words = len(words)
feature_tensors = []
target_tensors = []
for idx in range(0,len_words-sequence_length):
if (idx+sequence_length < len_words):
feature_tensors.append(words[idx:idx+sequence_length])
target_tensors.append(words[idx+sequence_length])
data = TensorDataset(torch.LongTensor(feature_tensors), (torch.LongTensor(target_tensors)))
data_loader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader

You'll have to modify this code to test a batching function, but it should look fairly similar.

Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.

Your code should return something like the following (likely in a different order, if you shuffled your data):

```
torch.Size([10, 5])
tensor([[ 28,  29,  30,  31,  32],
        [ 21,  22,  23,  24,  25],
        [ 17,  18,  19,  20,  21],
        [ 34,  35,  36,  37,  38],
        [ 11,  12,  13,  14,  15],
        [ 23,  24,  25,  26,  27],
        [  6,   7,   8,   9,  10],
        [ 38,  39,  40,  41,  42],
        [ 25,  26,  27,  28,  29],
        [  7,   8,   9,  10,  11]])

torch.Size([10])
tensor([ 33,  26,  22,  39,  16,  28,  11,  43,  30,  12])
```

Sizes

Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).

Values

You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # the built-in next() works across PyTorch versions; the iterator's .next() method is not available in newer releases
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[18, 19, 20, 21, 22],
[33, 34, 35, 36, 37],
[38, 39, 40, 41, 42],
[25, 26, 27, 28, 29],
[39, 40, 41, 42, 43],
[26, 27, 28, 29, 30],
[10, 11, 12, 13, 14],
[27, 28, 29, 30, 31],
[ 8, 9, 10, 11, 12],
[15, 16, 17, 18, 19]])
torch.Size([10])
tensor([23, 38, 43, 30, 44, 31, 15, 32, 13, 20])
###Markdown
--- Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.

The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.

**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.

Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer; you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
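Since the instructions leave the GRU/LSTM choice open, note that the two differ in what `init_hidden` has to return: an LSTM expects an `(h_0, c_0)` tuple, while a GRU expects a single tensor. A shape-only sketch of both (illustrative sizes):

```
import torch

n_layers, batch_size, hidden_dim = 2, 10, 256

# LSTM: the hidden state is a (hidden, cell) tuple
lstm_hidden = (torch.zeros(n_layers, batch_size, hidden_dim),
               torch.zeros(n_layers, batch_size, hidden_dim))

# GRU: the hidden state is a single tensor
gru_hidden = torch.zeros(n_layers, batch_size, hidden_dim)
```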
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and lstm layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.1)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function (x is batch_size x seq_length)
batch_size = x.size(0)
# pass through embedding layer and lstm
embeds = self.embed(x.long())
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs (in order)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout for regularization and pass through fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# get the last batch of word scores
out = out.view(batch_size, -1, self.output_size)
last_batch_scores = out[:, -1]
# return one batch of output word scores and the hidden state
return last_batch_scores, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.

**If a GPU is available, you should move your data to that GPU device, here.**
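One subtlety worth calling out: the hidden state carried over from the previous batch has to be cut from its computation graph before it is reused, otherwise backpropagation would extend through the entire training history. A minimal sketch of that repackaging step (the implementation below achieves the same thing by taking `each.data` for each tensor in the tuple):

```
def repackage_hidden(hidden):
    """Return tensors that share storage with `hidden` but carry no gradient history."""
    return tuple(h.detach() for h in hidden)  # an LSTM hidden state is an (h, c) tuple
```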
###Code
def forward_back_prop(rnn, optimizer, criterion, inputs, targets, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inputs, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), targets.long())
loss.backward()
# help prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.

If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
print(rnn)
if train_on_gpu:
print("Training on gpu")
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
RNN(
(embed): Embedding(21388, 256)
(lstm): LSTM(256, 256, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.1)
(fc): Linear(in_features=256, out_features=21388, bias=True)
)
Training on gpu
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.597207285881042
Epoch: 1/20 Loss: 4.945136706352234
Epoch: 1/20 Loss: 4.688877922058105
Epoch: 1/20 Loss: 4.557778511524201
Epoch: 1/20 Loss: 4.472045307159424
Epoch: 1/20 Loss: 4.3908713493347165
Epoch: 2/20 Loss: 4.280853354591664
Epoch: 2/20 Loss: 4.18701998090744
Epoch: 2/20 Loss: 4.147230245113373
Epoch: 2/20 Loss: 4.138403913497925
Epoch: 2/20 Loss: 4.110214523315429
Epoch: 2/20 Loss: 4.059590896606445
Epoch: 3/20 Loss: 4.031055136182444
Epoch: 3/20 Loss: 3.9617759823799132
Epoch: 3/20 Loss: 3.9564146003723146
Epoch: 3/20 Loss: 3.9414126343727114
Epoch: 3/20 Loss: 3.927466591835022
Epoch: 3/20 Loss: 3.9192955713272095
Epoch: 4/20 Loss: 3.8769721621420326
Epoch: 4/20 Loss: 3.8304171710014345
Epoch: 4/20 Loss: 3.814621898174286
Epoch: 4/20 Loss: 3.8046879591941836
Epoch: 4/20 Loss: 3.8182865715026857
Epoch: 4/20 Loss: 3.806731687068939
Epoch: 5/20 Loss: 3.756708551955417
Epoch: 5/20 Loss: 3.7152124028205873
Epoch: 5/20 Loss: 3.7155078110694886
Epoch: 5/20 Loss: 3.7212602343559267
Epoch: 5/20 Loss: 3.720372211933136
Epoch: 5/20 Loss: 3.7307218408584593
Epoch: 6/20 Loss: 3.6717124288159657
Epoch: 6/20 Loss: 3.6385444107055664
Epoch: 6/20 Loss: 3.642626296043396
Epoch: 6/20 Loss: 3.6448024158477783
Epoch: 6/20 Loss: 3.634449579715729
Epoch: 6/20 Loss: 3.65169495344162
Epoch: 7/20 Loss: 3.602716451011053
Epoch: 7/20 Loss: 3.557685826301575
Epoch: 7/20 Loss: 3.574955069065094
Epoch: 7/20 Loss: 3.5812946939468384
Epoch: 7/20 Loss: 3.5875872387886045
Epoch: 7/20 Loss: 3.589073181629181
Epoch: 8/20 Loss: 3.5334425982905597
Epoch: 8/20 Loss: 3.5055549998283384
Epoch: 8/20 Loss: 3.5164567112922667
Epoch: 8/20 Loss: 3.5271290922164917
Epoch: 8/20 Loss: 3.5304792709350585
Epoch: 8/20 Loss: 3.5377687726020812
Epoch: 9/20 Loss: 3.4811665481183587
Epoch: 9/20 Loss: 3.450687542915344
Epoch: 9/20 Loss: 3.4531266360282897
Epoch: 9/20 Loss: 3.4672906193733217
Epoch: 9/20 Loss: 3.4922217712402346
Epoch: 9/20 Loss: 3.483692674160004
Epoch: 10/20 Loss: 3.441195634564733
Epoch: 10/20 Loss: 3.4057532148361207
Epoch: 10/20 Loss: 3.4243404693603514
Epoch: 10/20 Loss: 3.4185665636062623
Epoch: 10/20 Loss: 3.4349712772369383
Epoch: 10/20 Loss: 3.435743576526642
Epoch: 11/20 Loss: 3.395347802135033
Epoch: 11/20 Loss: 3.3523110408782957
Epoch: 11/20 Loss: 3.369799837112427
Epoch: 11/20 Loss: 3.385363309383392
Epoch: 11/20 Loss: 3.404648055076599
Epoch: 11/20 Loss: 3.406935683250427
Epoch: 12/20 Loss: 3.3569239232598282
Epoch: 12/20 Loss: 3.3299567284584044
Epoch: 12/20 Loss: 3.333758858203888
Epoch: 12/20 Loss: 3.3565344247817994
Epoch: 12/20 Loss: 3.3535110931396486
Epoch: 12/20 Loss: 3.3671498990058897
Epoch: 13/20 Loss: 3.3291901742539753
Epoch: 13/20 Loss: 3.2964145069122313
Epoch: 13/20 Loss: 3.3002819323539736
Epoch: 13/20 Loss: 3.3061969776153566
Epoch: 13/20 Loss: 3.3083550405502318
Epoch: 13/20 Loss: 3.3414688987731935
Epoch: 14/20 Loss: 3.2864690192831243
Epoch: 14/20 Loss: 3.2616634464263914
Epoch: 14/20 Loss: 3.268913824558258
Epoch: 14/20 Loss: 3.2795535778999327
Epoch: 14/20 Loss: 3.293611352443695
Epoch: 14/20 Loss: 3.316191336631775
Epoch: 15/20 Loss: 3.2559234570196973
Epoch: 15/20 Loss: 3.2291657261848448
Epoch: 15/20 Loss: 3.243384324550629
Epoch: 15/20 Loss: 3.2465188884735108
Epoch: 15/20 Loss: 3.26113570022583
Epoch: 15/20 Loss: 3.2806422657966614
Epoch: 16/20 Loss: 3.2373093950554606
Epoch: 16/20 Loss: 3.205128993034363
Epoch: 16/20 Loss: 3.214359392642975
Epoch: 16/20 Loss: 3.235689097881317
Epoch: 16/20 Loss: 3.2317296509742737
Epoch: 16/20 Loss: 3.2436203575134277
Epoch: 17/20 Loss: 3.210456041301169
Epoch: 17/20 Loss: 3.1856013131141663
Epoch: 17/20 Loss: 3.1873209314346314
Epoch: 17/20 Loss: 3.215623960494995
Epoch: 17/20 Loss: 3.223004135131836
Epoch: 17/20 Loss: 3.220896931171417
Epoch: 18/20 Loss: 3.1717005427775344
Epoch: 18/20 Loss: 3.158070963382721
Epoch: 18/20 Loss: 3.179109445095062
Epoch: 18/20 Loss: 3.185102249622345
Epoch: 18/20 Loss: 3.1850178418159483
Epoch: 18/20 Loss: 3.201482448577881
Epoch: 19/20 Loss: 3.162135840916052
Epoch: 19/20 Loss: 3.1397451519966126
Epoch: 19/20 Loss: 3.146389533996582
Epoch: 19/20 Loss: 3.1490698828697203
Epoch: 19/20 Loss: 3.167155300617218
Epoch: 19/20 Loss: 3.1828755474090578
Epoch: 20/20 Loss: 3.140131360389353
Epoch: 20/20 Loss: 3.122141387939453
Epoch: 20/20 Loss: 3.1299677457809447
Epoch: 20/20 Loss: 3.12221867275238
Epoch: 20/20 Loss: 3.1405500621795652
Epoch: 20/20 Loss: 3.1553693571090697
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:** Since the model should look at a sentence length appropriate for a coherent stretch of conversation, I chose `sequence_length = 15`. I experimented a lot with `batch_size` and found that larger batch sizes (around 128) make the model converge faster. I also kept `learning_rate` quite low so that the model learns smoothly and the training loss decreases after every epoch. `hidden_dim = 256` performed best during the experimental runs, and `n_layers = 2` was enough (and faster to train than 3) to reach a training loss below 3.5 for the submission.

--- Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
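The only slightly non-obvious piece inside `generate` is how the rolling window of recent word ids is maintained: `np.roll` shifts the window left by one position and the freshly sampled word overwrites the last slot. A tiny, self-contained demonstration with made-up ids:

```
import numpy as np

current_seq = np.array([[11, 12, 13, 14, 15]])
current_seq = np.roll(current_seq, -1, 1)  # -> [[12, 13, 14, 15, 11]]
current_seq[-1][-1] = 99                   # overwrite the stale wrap-around value with the new word id
print(current_seq)                         # [[12 13 14 15 99]]
```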
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:.
elaine: oh my god.
jerry: well, you know, i was wondering..
kramer: well, i think you could get together, we can do this.
kramer: oh, yeah!
elaine: hey.
elaine: hey!
elaine: oh, hi!
george:(to jerry) what?(to kramer) i don't believe this, jerry.(they both move)
kramer: well, you gotta get a cab.
jerry: oh, no! that's what i did.
elaine: i don't know how to get you. i don't want to get it.
jerry: what are you doing?
elaine: i can't believe this!
jerry: oh, no...
kramer:(pointing at the counter) what about the drake?
george: i don't know.
george: i think you were going to be in the car.
kramer: well, i'm not going to get some sleep.
elaine:(to jerry) hey, i gotta go to the movies.(to jerry) i know you.
kramer: yeah!
jerry: i know.
george: what? why don't you just sit here in the shower?
jerry: i know.
elaine:(to jerry) i don't think so. i mean, i was thinking i could do a good 'hello' about the show.
george: i think i should.
jerry: well, what about the guy that was wearing a cape?!
elaine:(laughs.) you know what?
jerry: what is that?
jerry: i don't know.
elaine: i don't want to tell you something. you know what i do. i'm not going in...
jerry: oh, i think i was just curious.
kramer: yeah, that's a lot of pressure than a man.
kramer:(pointing) oh, that's the way i ever saw.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {}
int_to_vocab = {}
# Start from 0, no need for a special character for padding
index = 0
for word in text:
if word not in vocab_to_int:
vocab_to_int[word] = index
int_to_vocab[index] = word
index = index + 1
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punctuation_to_token = {}
punctuation_to_token["."] = "||Period||"
punctuation_to_token[","] = "||Comma||"
punctuation_to_token['"'] = "||Quotation_Mark||"
punctuation_to_token[';'] = "||Semicolon||"
punctuation_to_token['!'] = "||Exclamation_Mark||"
punctuation_to_token['?'] = "||Question_Mark||"
punctuation_to_token['('] = "||Left_Parentheses||"
punctuation_to_token[')'] = "||Right_Parentheses||"
punctuation_to_token["-"] = "||Dash||"
punctuation_to_token["\n"] = "||Return||"
return punctuation_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
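###Markdown
To illustrate how this dictionary is meant to be used (the actual replacement happens inside `helper.preprocess_and_save_data`, so the snippet below is only a rough sketch with a made-up sample string), each symbol is swapped for its token and padded with spaces before the text is split:
###Code
# rough sketch of how the token dict is applied before splitting on whitespace
sample = 'bye! bye.'
tokens = token_lookup()
for symbol, token in tokens.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())   # ['bye', '||Exclamation_Mark||', 'bye', '||Period||'] -- "bye" now maps to one id
###Output
_____no_output_____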
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features_count = len(words) - sequence_length
features = np.zeros((features_count, sequence_length), dtype=int)
targets = np.zeros((features_count, 1), dtype=int)
for ii in range(features_count):
start = ii
end = start + sequence_length # This is guaranteed to be < len(words)
features[ii, :] = words[start : end]
targets[ii] = words[end]
train_test_fraction = 0.9
split_idx = int(len(features) * train_test_fraction)
train_x, valid_x = features[:split_idx], features[split_idx:]
train_y, valid_y = targets[:split_idx], targets[split_idx:]
train_dataset = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y).squeeze())
validation_dataset = TensorDataset(torch.from_numpy(valid_x), torch.from_numpy(valid_y).squeeze())
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
validation_dataloader = DataLoader(validation_dataset, shuffle=True, batch_size=batch_size)
# return a dataloader
return train_dataloader, validation_dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
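###Markdown
Note that, unlike the project template, this `batch_data` returns a `(train, validation)` pair of loaders built from a 90/10 split. A quick sketch with toy data (not required by the project) to confirm the split sizes:
###Code
# quick check of the 90/10 split produced by batch_data above (toy data only)
toy_train, toy_valid = batch_data(list(range(100)), sequence_length=5, batch_size=10)
print(len(toy_train.dataset), len(toy_valid.dataset))   # 95 windows total -> expect 85 and 10
###Output
_____no_output_____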
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader, _ = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 19, 20, 21, 22, 23],
[ 30, 31, 32, 33, 34],
[ 10, 11, 12, 13, 14],
[ 6, 7, 8, 9, 10],
[ 31, 32, 33, 34, 35],
[ 38, 39, 40, 41, 42],
[ 27, 28, 29, 30, 31],
[ 25, 26, 27, 28, 29],
[ 39, 40, 41, 42, 43],
[ 33, 34, 35, 36, 37]])
torch.Size([10])
tensor([ 24, 35, 15, 11, 36, 43, 32, 30, 44, 38])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
# dropout not used
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True, dropout=0)
# self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
x = self.embedding(nn_input)
out, hidden = self.lstm(x, hidden)
# Removed the dropout
# out = self.dropout(out)
# Flatten, input to dense layer
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
out = out.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return out[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
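###Markdown
As an optional shape check on top of the unit test (a sketch with tiny, arbitrary dimensions), the forward pass should return word scores of shape `(batch_size, output_size)` and a hidden state of shape `(n_layers, batch_size, hidden_dim)`:
###Code
# optional shape check with tiny, arbitrary dimensions (not part of the graded tests)
tiny_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
tiny_inp = torch.randint(0, 20, (4, 6)).long()   # batch of 4 sequences, each 6 word ids long
if train_on_gpu:
    tiny_rnn.cuda()
    tiny_inp = tiny_inp.cuda()
tiny_hidden = tiny_rnn.init_hidden(4)
tiny_out, tiny_hidden = tiny_rnn(tiny_inp, tiny_hidden)
print(tiny_out.shape)        # expected: torch.Size([4, 20])
print(tiny_hidden[0].shape)  # expected: torch.Size([2, 4, 16])
###Output
_____no_output_____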
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inp, hidden)
loss = criterion(output.squeeze(), target.long())
loss.backward()
clip=5 # gradient clipping
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
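###Markdown
As a rough illustration of how the function is called (toy dimensions and random data only; the real call happens inside the training loop below), a single optimization step looks like this:
###Code
# rough single-step illustration with toy dimensions and random data (sketch only)
demo_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
if train_on_gpu:
    demo_rnn.cuda()
demo_optimizer = torch.optim.Adam(demo_rnn.parameters(), lr=0.001)
demo_criterion = nn.CrossEntropyLoss()
demo_inp = torch.randint(0, 20, (4, 6)).long()   # batch of 4 sequences of length 6
demo_target = torch.randint(0, 20, (4,)).long()  # one target word id per sequence
demo_hidden = demo_rnn.init_hidden(4)
demo_loss, demo_hidden = forward_back_prop(demo_rnn, demo_optimizer, demo_criterion, demo_inp, demo_target, demo_hidden)
print(demo_loss)   # a single scalar loss value for this batch
###Output
_____no_output_____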
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after every set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = "trained_rnn.pt"
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, dev_mode=False):
batch_losses = []
valid_loss_min = np.Inf
best_model = rnn
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
dev_mode_counter = 0
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
if dev_mode and dev_mode_counter > 3:
break
dev_mode_counter += 1
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if dev_mode or (batch_i % show_every_n_batches) == 0:
print('Epoch: {:>4}/{:<4} Training loss: {}'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# Validation loss
rnn.eval()
with torch.no_grad():
val_h = rnn.init_hidden(batch_size)
val_losses = []
validation_counter = 0
val_batches = len(validation_loader.dataset) // batch_size
for val_batch, (val_inputs, val_labels) in enumerate(validation_loader, 1):
if dev_mode and validation_counter > 3:
break
validation_counter += 1
# make sure you iterate over completely full batches, only
if(val_batch > val_batches):
break
# Creating new variables for the hidden state
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
val_inputs, val_labels = val_inputs.cuda(), val_labels.cuda()
val_output, val_h = rnn(val_inputs, val_h)
val_loss = criterion(val_output.squeeze(), val_labels.long())
val_losses.append(val_loss.item())
rnn.train()
valid_loss = np.mean(val_losses)
print("Validation Loss: {:.6f}\n".format(valid_loss))
if valid_loss <= valid_loss_min:
print(f"Old valid_loss_min ({valid_loss_min}), new valid_loss ({valid_loss}). Saving the model.")
torch.save(rnn.state_dict(), save_path)
valid_loss_min = valid_loss
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader, validation_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
dev_mode = False
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int) # No need for zero-padding
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 500
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = int(len(train_loader.dataset)//batch_size / 5)
###Output
_____no_output_____
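###Markdown
A rough, optional way to gauge the model size these settings imply (the embedding table and the final fully-connected layer are the terms that scale with `vocab_size`; LSTM parameters are omitted from this back-of-the-envelope sketch):
###Code
# back-of-the-envelope parameter estimate for the settings above (LSTM weights not counted)
embedding_params = vocab_size * embedding_dim         # embedding lookup table
fc_params = hidden_dim * output_size + output_size    # final linear layer weights + bias
print('vocab size: {}'.format(vocab_size))
print('embedding parameters: ~{:,}'.format(embedding_params))
print('output layer parameters: ~{:,}'.format(fc_params))
###Output
_____no_output_____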
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, dev_mode=dev_mode)
# saving the trained model
# helper.save_model('./save/trained_rnn', trained_rnn)
# print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Training loss: 4.757854051255476
Epoch: 1/10 Training loss: 4.244379404960471
Epoch: 1/10 Training loss: 4.12962354517116
Epoch: 1/10 Training loss: 4.052596922745597
Epoch: 1/10 Training loss: 4.00840830118735
Validation Loss: 4.354143
Old valid_loss_min (inf), new valid_loss (4.354142875869926). Saving the model.
Epoch: 2/10 Training loss: 3.7558324923311965
Epoch: 2/10 Training loss: 3.7620827691780234
Epoch: 2/10 Training loss: 3.7922432279149674
Epoch: 2/10 Training loss: 3.7780646311424317
Epoch: 2/10 Training loss: 3.7999744747871893
Validation Loss: 4.369741
Epoch: 3/10 Training loss: 3.514387589424623
Epoch: 3/10 Training loss: 3.554930131812645
Epoch: 3/10 Training loss: 3.5986679902158443
Epoch: 3/10 Training loss: 3.617089119599891
Epoch: 3/10 Training loss: 3.6443392696395818
Validation Loss: 4.434229
Epoch: 4/10 Training loss: 3.3298647260228775
Epoch: 4/10 Training loss: 3.3916470419129987
Epoch: 4/10 Training loss: 3.433064693471453
Epoch: 4/10 Training loss: 3.479323778270297
Epoch: 4/10 Training loss: 3.5277685745630003
Validation Loss: 4.527420
Epoch: 5/10 Training loss: 3.1797384595243803
Epoch: 5/10 Training loss: 3.241211139333823
Epoch: 5/10 Training loss: 3.3131291909063894
Epoch: 5/10 Training loss: 3.345803231345367
Epoch: 5/10 Training loss: 3.3945727029992745
Validation Loss: 4.648780
Epoch: 6/10 Training loss: 3.046087589668724
Epoch: 6/10 Training loss: 3.1217415750905686
Epoch: 6/10 Training loss: 3.1935753337697426
Epoch: 6/10 Training loss: 3.2401476218055185
Epoch: 6/10 Training loss: 3.287457555354283
Validation Loss: 4.763307
Epoch: 7/10 Training loss: 2.948822335543125
Epoch: 7/10 Training loss: 3.0267605736893266
Epoch: 7/10 Training loss: 3.0821627108116654
Epoch: 7/10 Training loss: 3.1334047317504883
Epoch: 7/10 Training loss: 3.196608396581845
Validation Loss: 4.856435
Epoch: 8/10 Training loss: 2.8503221546620097
Epoch: 8/10 Training loss: 2.9218662947285647
Epoch: 8/10 Training loss: 3.000046319722084
Epoch: 8/10 Training loss: 3.0440855964145226
Epoch: 8/10 Training loss: 3.1074035792960832
Validation Loss: 4.971439
Epoch: 9/10 Training loss: 2.759586052668432
Epoch: 9/10 Training loss: 2.843286538703739
Epoch: 9/10 Training loss: 2.92232500342745
Epoch: 9/10 Training loss: 2.9734146603078373
Epoch: 9/10 Training loss: 3.032673814943845
Validation Loss: 5.062698
Epoch: 10/10 Training loss: 2.6831030471717376
Epoch: 10/10 Training loss: 2.7677098041323847
Epoch: 10/10 Training loss: 2.8633623726579858
Epoch: 10/10 Training loss: 2.918185781415785
Epoch: 10/10 Training loss: 2.9696068574157355
Validation Loss: 5.161663
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** *Dropout*: I initially used a drop-out probability of 0.5, but found that the model was underfitting and the training error couldn't reach the requested maximum of 3.5. So I changed the hyper-parameter to 0, making the model effectively not use dropout. This resulted in a final training loss of 2.97. *sequence_length*: By looking at some examples from the training corpus, 8 looked like a reasonable value. *batch_size*: 64 is a standard value, and no out-of-memory errors happened. *num_epochs*: 10 was enough to go below the requested 3.5 training loss. *learning_rate*: 0.001 resulted in the model learning within 10 epochs and steadily decreasing the training error. *n_layers*: 2 or 3 are common values. I chose 2 and the results were satisfactory. *hidden_dim*: I chose 256 based on my previous experience with the sentiment analysis exercise. The results turned out to be satisfactory. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# trained_rnn = helper.load_model('./save/trained_rnn')
# load the model that got the best validation accuracy
trained_rnn.load_state_dict(torch.load(save_path))
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
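###Markdown
The top-k sampling step inside `generate` can be illustrated in isolation (a sketch with a made-up score vector for a single sequence):
###Code
# isolated sketch of the top-k sampling step used in generate() above (fake scores)
fake_scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2, 1.0]])   # word scores for one sequence
fake_p = F.softmax(fake_scores, dim=1).data
fake_p, fake_top_i = fake_p.topk(3)                            # keep the 3 most likely word ids
fake_top_i = fake_top_i.numpy().squeeze()
fake_p = fake_p.numpy().squeeze()
print(np.random.choice(fake_top_i, p=fake_p / fake_p.sum()))   # one of ids 1, 3 or 5, weighted by score
###Output
_____no_output_____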
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: you have a little bit of the whole day of the way to the hospital.
jerry: i dont think so.
jerry: so, i can't go.
elaine: well, i think i can get a lot to do with a few. you know, i just wanted to be a great way.
jerry: well, i can't believe i was just going to be a little more than a little good thing.
elaine:(to the phone) i got a lot of time.
elaine:(looking at george) i think i can do anything.
elaine: i think you know..
elaine:(smiling) yeah, i don't want any money.
george: you don't know?
kramer: well, i'm not gonna go back to the hospital...(george enters)
kramer: well, i think you got your own life, and you can just go.(she turns) what are you doing?
jerry: yeah, i don't even think i can get the car!
kramer: no, i'm sorry. you know how i got the same life in a lot of a lot, you know what i think i do like?
jerry: no, no, no, i can't get my money.
jerry: i know i was just wondering what i can.
jerry: i know, i don't even think so.
kramer:(leaving) i can't tell you what, i'm sorry.
jerry: i have to get it back and just take a drink to a few time, i got the way you think i could get out of your car?
george: i don't know how you said, you know i know, i know what, i can't tell you what, i think you got it?
george: i know, i don't know what, i can't...
elaine: what? what is that?(to jerry) i don't think i can...
jerry:
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_count = Counter(text)
vocab = sorted(word_count, key=word_count.get, reverse=True)
vocab_to_int = dict([(w,ii) for ii, w in enumerate(vocab)])
int_to_vocab = dict([(v,k) for k, v in vocab_to_int.items()])
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
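###Markdown
Because this implementation sorts the vocabulary by frequency, id 0 belongs to the most common word. A quick sketch with a made-up word list (not part of the project requirements):
###Code
# quick check: ids follow descending word frequency in this implementation (toy data)
toy_v2i, toy_i2v = create_lookup_tables(['the', 'cat', 'the', 'dog', 'the', 'cat'])
print(toy_v2i)   # expected: {'the': 0, 'cat': 1, 'dog': 2}
###Output
_____no_output_____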
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.' : '<PERIOD>',
',' : '<COMMA>',
'"': '<QUOTATION_MARK>',
';': '<SEMICOLON>',
'!': '<EXCLAMATION>',
'?': '<QUESTION>',
'(': '<LEFT_PARA>',
')': '<RIGHT_PARA>',
'-': '<DASH>',
'\n': '<RET>'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
len(int_text)
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
possible_batches = len(words)//batch_size
words = words[:possible_batches*batch_size]
feature = np.array([words[i:(i+sequence_length)] for i in range(len(words)-sequence_length)])
target = np.array([words[i+sequence_length] for i in range(len(words)-sequence_length)])
feature_tensor, target_tensor = torch.from_numpy(feature), torch.from_numpy(target)
data = TensorDataset(feature_tensor, target_tensor)
data_loader = DataLoader(data,shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 36, 37, 38, 39, 40],
[ 4, 5, 6, 7, 8],
[ 26, 27, 28, 29, 30],
[ 25, 26, 27, 28, 29],
[ 33, 34, 35, 36, 37],
[ 16, 17, 18, 19, 20],
[ 27, 28, 29, 30, 31],
[ 11, 12, 13, 14, 15],
[ 43, 44, 45, 46, 47],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 41, 9, 31, 30, 38, 21, 32, 16, 48, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# set class variables
# define model layers
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
#self.dropout = nn.Dropout(0.25)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
emb = self.embeddings(nn_input)
lstm_out, hidden = self.lstm(emb, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#out = self.dropout(out) # Removing dropout to improve model training.
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:,-1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
l1 = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
l2 = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
if train_on_gpu:
hidden = (l1.cuda(), l2.cuda())
else :
hidden = (l1, l2)
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# perform backpropagation and optimization
out, h = rnn(inp, h)
loss = criterion(out, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
min_loss = 3.2
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
counter = 0
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
counter += 1
print('Training batch {}...'.format(counter), end='\r', flush=True)
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
avg_loss = np.average(batch_losses)
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, avg_loss))
if avg_loss < min_loss:
print('loss decreased {:.6f} --> {:.6f}, Saving model..'.format(min_loss, avg_loss,))
helper.save_model('./save/trained_rnn', rnn)
min_loss = avg_loss
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 300
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.371952863693237
loss decreased 9.181360 --> 5.371953, Saving model..
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I consulted online papers, articles and tutorials for selecting the hyperparameters. Unfortunately, hyperparameter tuning is an extremely resource-intensive task, so I intended to start with the most commonly used values and experiment by changing them slightly._One key learning: I started by replicating the model used in the sentiment RNN. In doing so I kept my last layer as a sigmoid, without realizing that, unlike sentiment analysis (which is a 0/1 task), I needed multi-class classification. After spending 5 hours of GPU time in which the loss did not decline below 9.2, I looked at the code and realized my mistake. After correcting it, my loss quickly declined from 5.7 to 3.9 in just two epochs._- `sequence_length`: 10 appears to be the most commonly used value. The stated logic is that it is slightly above the average length of English words (~6-7).- `batch_size`: Batch sizes suggested in the literature are powers of two (32, 64, 128, 256), depending on the memory available. I have chosen 256.- `num_epochs`: I started by experimenting with 20 epochs. After training for 6 epochs my loss came down from 9.8 to 9.23; the loss was declining, but very slowly. I consulted online articles and forums, where the suggested number of epochs was between 30 and 60. I trained the model for 35 epochs.- `learning_rate`: After experimenting with (0.1, 0.01, 0.001), 0.001 was optimal for me. At 0.01 the loss was not declining at all, even after 10 epochs.- `vocab_size`: As per the given text corpus. - `output_size`: The output size is equal to the vocab size. Each word in the vocabulary has a probability attached, and the most probable words are used to pick the next word.- `embedding_dim`: As suggested in the course, the embedding dimension should be between 200 and 300.- `hidden_dim`: The hidden dimension is selected in the range between 200 and 500, as indicated in the course lectures.- `n_layers`: According to the literature, two hidden layers are sufficient to approximate any arbitrary function.**References used:** 1. [The Number of Hidden Layers](https://www.researchgate.net/post/How_to_decide_the_number_of_hidden_layers_and_nodes_in_a_hidden_layer) 2. [How many hidden units should I use?](http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-10.html)
###Code
rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
--- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:41: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
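###Markdown
The warning above is benign; as its message suggests, the LSTM weights can be compacted after loading the model by calling `flatten_parameters()` on the LSTM module (an optional one-liner that may silence the warning on GPU):
###Code
# optional: compact the LSTM weights after loading, as suggested by the warning above
trained_rnn.lstm.flatten_parameters()
###Output
_____no_output_____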
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
counts = Counter(text)
sorted_vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii , word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# encode the text
encoded = np.array([vocab_to_int[word] for word in text])
return (vocab_to_int, int_to_vocab)
#counts = Counter(words)
#vocab = sorted(counts, key=counts.get, reverse=True)
#vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
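As a rough sketch of how such a dictionary gets used (the real substitution happens inside the course's `preprocess_and_save_data` helper, so this is only an illustration, assuming a `token_lookup()` like the one implemented below):
```
# illustration only: pad each symbol's token with spaces so it splits out as its own "word"
line = 'are you through?'
for symbol, token in token_lookup().items():
    line = line.replace(symbol, ' {} '.format(token))
words = line.split()  # the question mark now shows up as a separate token
```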
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
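As a minimal sketch of that windowing (illustration only; the `batch_data` implementation below wraps the same idea in a `TensorDataset`/`DataLoader`):
```
# illustration only: sliding-window features/targets for the example above
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
# features -> [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
# targets  -> [5, 6, 7]
```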
###Code
def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
# getting the correct rows x cols shape
features = np.zeros((len(reviews_ints), seq_length), dtype=int)
# for each review, I grab that review and
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_length]
return features
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
# TODO: Implement function
features, targets = [], []
for idx in range(0, (len(words) - sequence_length) ):
features.append(words[idx : idx + sequence_length])
targets.append(words[idx + sequence_length])
#print(features)
#print(targets)
data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(targets)))
data_loader = torch.utils.data.DataLoader(data, shuffle=True , batch_size = batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 32, 33, 34, 35, 36],
[ 0, 1, 2, 3, 4],
[ 14, 15, 16, 17, 18],
[ 15, 16, 17, 18, 19],
[ 34, 35, 36, 37, 38],
[ 24, 25, 26, 27, 28],
[ 9, 10, 11, 12, 13],
[ 37, 38, 39, 40, 41],
[ 19, 20, 21, 22, 23],
[ 41, 42, 43, 44, 45]])
torch.Size([10])
tensor([ 37, 5, 19, 20, 39, 29, 14, 42, 24, 46])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.output_size = output_size
self.token = token_lookup()
self.vocab_to_int, self.int_to_vocab = create_lookup_tables(set(text))
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, self.output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
output = self.fc(lstm_out)
# reshape to be batch_size first
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
# # Creating new variables for the hidden state, otherwise
# # we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
# print(h[0].data)
# get predicted outputs
output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 2
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size  # one score per vocabulary word, so CrossEntropyLoss can be computed over word ids
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 2 epoch(s)...
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
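To see just that sampling step in isolation, here is a tiny sketch with made-up word scores (assumed values, not taken from the trained model):
```
# illustration only: keep the k most likely word ids and sample among them
import numpy as np
import torch
scores = torch.tensor([[0.1, 2.0, 0.5, 3.0, 0.2]])  # assumed scores for one input sequence
p = torch.softmax(scores, dim=1)
p, top_i = p.topk(3)                                 # 3 highest-probability word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())      # renormalize and pick one at random
```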
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
from collections import Counter
word_counter = Counter(text)
sorted_vocab = sorted(word_counter, key=word_counter.get, reverse=True)
int_to_vocab = {index: word for index, word in enumerate(sorted_vocab, 1)}
vocab_to_int = {word: index for index, word in int_to_vocab.items() }
# return tuple
return (vocab_to_int, int_to_vocab)
create_lookup_tables
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_parentheses||',
')':'||right_parentheses||',
'-':'||dash||',
'\n':'||return||',
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# stats about vocabulary
print('Unique words: ', len(vocab_to_int) )
print()
# print tokens in first eighty words
print('Tokenized text: \n', int_text[0:80])
# print first eighty words
print('Tokenized text: \n', [int_to_vocab[token] for token in int_text[0:80] ])
###Output
Unique words: 21388
Tokenized text:
[25, 23, 48, 2, 2, 2, 18, 48, 23, 83, 21, 7, 1253, 546, 8783, 7190, 21, 242, 2, 150, 2, 2, 2, 85, 5, 201, 239, 150, 209, 59, 56, 136, 65, 48, 4, 25, 23, 19, 678, 209, 59, 2, 2, 2, 25, 221, 127, 3, 122, 51, 48, 87, 3, 27, 83, 23, 290, 2, 46, 83, 375, 63, 23, 290, 3, 122, 51, 48, 11, 77, 49, 150, 272, 9, 249, 192, 3, 66, 205, 28]
Tokenized text:
['this', 'is', 'out', '||period||', '||period||', '||period||', 'and', 'out', 'is', 'one', 'of', 'the', 'single', 'most', 'enjoyable', 'experiences', 'of', 'life', '||period||', 'people', '||period||', '||period||', '||period||', 'did', 'you', 'ever', 'hear', 'people', 'talking', 'about', 'we', 'should', 'go', 'out', '||question_mark||', 'this', 'is', 'what', 'theyre', 'talking', 'about', '||period||', '||period||', '||period||', 'this', 'whole', 'thing', '||comma||', 'were', 'all', 'out', 'now', '||comma||', 'no', 'one', 'is', 'home', '||period||', 'not', 'one', 'person', 'here', 'is', 'home', '||comma||', 'were', 'all', 'out', '||exclamation_mark||', 'there', 'are', 'people', 'trying', 'to', 'find', 'us', '||comma||', 'they', 'dont', 'know']
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
total_words = len(words)
total_seqs = -(-total_words//sequence_length) #calculate the ceiling total sequences
features = np.zeros((total_seqs,sequence_length), dtype=int)
next_word = 0
for seq_index in range(total_seqs):
for word_index in range(sequence_length):
features[seq_index, word_index] = words[next_word] if next_word < total_words else 0
next_word += 1
targets = np.array([features[index + 1 if (index + 1 < total_seqs) else index,0] for index in range(total_seqs)])
print("Features shape=",features.shape)
print("targets shape=",targets.shape)
feature_tensors = torch.from_numpy(features)
target_tensors = torch.from_numpy(targets)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
Features shape= (10, 5)
targets shape= (10,)
torch.Size([10, 5])
tensor([[ 5, 6, 7, 8, 9],
[ 10, 11, 12, 13, 14],
[ 0, 1, 2, 3, 4],
[ 45, 46, 47, 48, 49],
[ 20, 21, 22, 23, 24],
[ 25, 26, 27, 28, 29],
[ 30, 31, 32, 33, 34],
[ 40, 41, 42, 43, 44],
[ 35, 36, 37, 38, 39],
[ 15, 16, 17, 18, 19]])
torch.Size([10])
tensor([ 10, 15, 5, 45, 25, 30, 35, 45, 40, 20])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.drop = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
lstm_output, hidden = self.lstm(self.embed(nn_input), hidden)
lstm_output = lstm_output.contiguous()
lstm_output = lstm_output.view(-1, self.hidden_dim)
output = self.drop(lstm_output)
output = self.fc(output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
inputs, targets = inp, target
if(train_on_gpu):
#rnn.cuda()
inputs, targets = inp.cuda(), targets.cuda()
# perform backpropagation and optimization
#grad_clip = 5
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inputs, hidden)
#print("target size =",targets.shape)
#print("target =",targets)
loss = criterion(output, targets.long())
loss.backward()
#nn.utils.clip_grad_norm_(rnn.parameters(), grad_clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 50
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)+len(token_dict)+1
# Output size
output_size = vocab_size #batch_size
# Embedding Dimension
embedding_dim = 900
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 70 #500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 50 epoch(s)...
Epoch: 1/50 Loss: 6.516255644389561
Epoch: 1/50 Loss: 5.86168315751212
Epoch: 1/50 Loss: 5.606094251360212
Epoch: 1/50 Loss: 5.437839467184884
Epoch: 1/50 Loss: 5.223972858701433
Epoch: 1/50 Loss: 5.197412736075265
Epoch: 1/50 Loss: 5.101003462927682
Epoch: 1/50 Loss: 5.063916070120675
Epoch: 1/50 Loss: 5.006963586807251
Epoch: 2/50 Loss: 4.9118563532829285
Epoch: 2/50 Loss: 4.82100579398019
Epoch: 2/50 Loss: 4.725071457454137
Epoch: 2/50 Loss: 4.766778802871704
Epoch: 2/50 Loss: 4.721484000342233
Epoch: 2/50 Loss: 4.767718843051365
Epoch: 2/50 Loss: 4.772613341467721
Epoch: 2/50 Loss: 4.691116319383894
Epoch: 2/50 Loss: 4.6837110485349385
Epoch: 3/50 Loss: 4.6390357683686645
Epoch: 3/50 Loss: 4.5129719768251695
Epoch: 3/50 Loss: 4.566531099591937
Epoch: 3/50 Loss: 4.564511694226947
Epoch: 3/50 Loss: 4.495224452018737
Epoch: 3/50 Loss: 4.491748929023743
Epoch: 3/50 Loss: 4.477335340636117
Epoch: 3/50 Loss: 4.43922290120806
Epoch: 3/50 Loss: 4.502414808954511
Epoch: 4/50 Loss: 4.4076981509433075
Epoch: 4/50 Loss: 4.325416319710868
Epoch: 4/50 Loss: 4.294022154808045
Epoch: 4/50 Loss: 4.31599246774401
Epoch: 4/50 Loss: 4.313614668164934
Epoch: 4/50 Loss: 4.349859540803092
Epoch: 4/50 Loss: 4.3211626018796645
Epoch: 4/50 Loss: 4.283437265668597
Epoch: 4/50 Loss: 4.2687146254948205
Epoch: 5/50 Loss: 4.237017291433671
Epoch: 5/50 Loss: 4.154844655309405
Epoch: 5/50 Loss: 4.120998999050685
Epoch: 5/50 Loss: 4.130488848686218
Epoch: 5/50 Loss: 4.189718198776245
Epoch: 5/50 Loss: 4.168764836447579
Epoch: 5/50 Loss: 4.155822706222534
Epoch: 5/50 Loss: 4.238966458184379
Epoch: 5/50 Loss: 4.169158591542925
Epoch: 6/50 Loss: 4.081895240966012
Epoch: 6/50 Loss: 3.9977971349443706
Epoch: 6/50 Loss: 4.022027455057416
Epoch: 6/50 Loss: 4.036387862477984
Epoch: 6/50 Loss: 4.025529558318002
Epoch: 6/50 Loss: 4.009703128678458
Epoch: 6/50 Loss: 4.011410593986511
Epoch: 6/50 Loss: 4.039392553056989
Epoch: 6/50 Loss: 4.0283041681562155
Epoch: 7/50 Loss: 3.9071320558295533
Epoch: 7/50 Loss: 3.853223122869219
Epoch: 7/50 Loss: 3.9101293223244804
Epoch: 7/50 Loss: 3.908613521712167
Epoch: 7/50 Loss: 3.8703491824013847
Epoch: 7/50 Loss: 3.9248798983437676
Epoch: 7/50 Loss: 3.9340441976274763
Epoch: 7/50 Loss: 3.899641040393284
Epoch: 7/50 Loss: 3.9076489550726756
Epoch: 8/50 Loss: 3.798497180728351
Epoch: 8/50 Loss: 3.7275115864617483
Epoch: 8/50 Loss: 3.73186023575919
Epoch: 8/50 Loss: 3.7462349789483205
Epoch: 8/50 Loss: 3.8472062179020474
Epoch: 8/50 Loss: 3.8653225455965314
Epoch: 8/50 Loss: 3.8133172171456473
Epoch: 8/50 Loss: 3.76356018951961
Epoch: 8/50 Loss: 3.7923459018979755
Epoch: 9/50 Loss: 3.6809697922538307
Epoch: 9/50 Loss: 3.619913159097944
Epoch: 9/50 Loss: 3.6286987577165877
Epoch: 9/50 Loss: 3.676372402054923
Epoch: 9/50 Loss: 3.6288090569632394
Epoch: 9/50 Loss: 3.725093626976013
Epoch: 9/50 Loss: 3.6889643941606796
Epoch: 9/50 Loss: 3.6597681965146744
Epoch: 9/50 Loss: 3.723722553253174
Epoch: 10/50 Loss: 3.6088688776773563
Epoch: 10/50 Loss: 3.562079051562718
Epoch: 10/50 Loss: 3.5630919013704574
Epoch: 10/50 Loss: 3.54389466217586
Epoch: 10/50 Loss: 3.609085536003113
Epoch: 10/50 Loss: 3.5493982076644897
Epoch: 10/50 Loss: 3.547520194734846
Epoch: 10/50 Loss: 3.6135632174355643
Epoch: 10/50 Loss: 3.557408319200788
Epoch: 11/50 Loss: 3.4951177782872143
Epoch: 11/50 Loss: 3.460303432600839
Epoch: 11/50 Loss: 3.432663583755493
Epoch: 11/50 Loss: 3.460532181603568
Epoch: 11/50 Loss: 3.494903039932251
Epoch: 11/50 Loss: 3.453531149455479
Epoch: 11/50 Loss: 3.5079573358808247
Epoch: 11/50 Loss: 3.4683453900473458
Epoch: 11/50 Loss: 3.4621589251926967
Epoch: 12/50 Loss: 3.41192371003768
Epoch: 12/50 Loss: 3.30539379460471
Epoch: 12/50 Loss: 3.3282538516180855
Epoch: 12/50 Loss: 3.378475890840803
Epoch: 12/50 Loss: 3.37649895804269
Epoch: 12/50 Loss: 3.443336132594517
Epoch: 12/50 Loss: 3.366505377633231
Epoch: 12/50 Loss: 3.4109419754573276
Epoch: 12/50 Loss: 3.4324763093675887
Epoch: 13/50 Loss: 3.330062797840904
Epoch: 13/50 Loss: 3.207138296536037
Epoch: 13/50 Loss: 3.219822716712952
Epoch: 13/50 Loss: 3.264945374216352
Epoch: 13/50 Loss: 3.2873015778405326
Epoch: 13/50 Loss: 3.2862890788487027
Epoch: 13/50 Loss: 3.3536178827285767
Epoch: 13/50 Loss: 3.34047327041626
Epoch: 13/50 Loss: 3.334790475027902
Epoch: 14/50 Loss: 3.2412937230923595
Epoch: 14/50 Loss: 3.138429239818028
Epoch: 14/50 Loss: 3.1816776309694563
Epoch: 14/50 Loss: 3.189485151427133
Epoch: 14/50 Loss: 3.1695362976619177
Epoch: 14/50 Loss: 3.2195068291255406
Epoch: 14/50 Loss: 3.2253613335745674
Epoch: 14/50 Loss: 3.224652978352138
Epoch: 14/50 Loss: 3.2669188465390886
Epoch: 15/50 Loss: 3.170200326863457
Epoch: 15/50 Loss: 3.0629969188145227
Epoch: 15/50 Loss: 3.065439278738839
Epoch: 15/50 Loss: 3.0849928549357823
Epoch: 15/50 Loss: 3.1504097325461253
Epoch: 15/50 Loss: 3.161905942644392
Epoch: 15/50 Loss: 3.1909539188657488
Epoch: 15/50 Loss: 3.139255288669041
Epoch: 15/50 Loss: 3.1657412937709264
Epoch: 16/50 Loss: 3.0826417873887455
Epoch: 16/50 Loss: 2.998731357710702
Epoch: 16/50 Loss: 3.0324170998164584
Epoch: 16/50 Loss: 3.0150287117276875
Epoch: 16/50 Loss: 3.0433710302625383
Epoch: 16/50 Loss: 3.05235926423754
Epoch: 16/50 Loss: 3.0695734534944807
Epoch: 16/50 Loss: 3.1143928936549594
Epoch: 16/50 Loss: 3.1514809540339876
Epoch: 17/50 Loss: 3.0037588231703816
Epoch: 17/50 Loss: 2.961268677030291
Epoch: 17/50 Loss: 2.9737984146390644
Epoch: 17/50 Loss: 2.9536712646484373
Epoch: 17/50 Loss: 2.957389143535069
Epoch: 17/50 Loss: 2.98039288520813
Epoch: 17/50 Loss: 3.009908495630537
Epoch: 17/50 Loss: 3.0398860590798513
Epoch: 17/50 Loss: 3.043190012659345
Epoch: 18/50 Loss: 2.9358325846054973
Epoch: 18/50 Loss: 2.85151687008994
Epoch: 18/50 Loss: 2.8962000949042186
Epoch: 18/50 Loss: 2.8820817981447493
Epoch: 18/50 Loss: 2.9156526156834195
Epoch: 18/50 Loss: 2.9500359092439923
Epoch: 18/50 Loss: 2.9240504843848094
Epoch: 18/50 Loss: 2.916211932046073
Epoch: 18/50 Loss: 3.00678756577628
Epoch: 19/50 Loss: 2.890795670888003
Epoch: 19/50 Loss: 2.8178381170545306
Epoch: 19/50 Loss: 2.8047392436436245
Epoch: 19/50 Loss: 2.8163665396826607
Epoch: 19/50 Loss: 2.8722220863614765
Epoch: 19/50 Loss: 2.8969897406441825
Epoch: 19/50 Loss: 2.8188178982053484
Epoch: 19/50 Loss: 2.8891391481672013
Epoch: 19/50 Loss: 2.9413866690226964
Epoch: 20/50 Loss: 2.8249790773672214
Epoch: 20/50 Loss: 2.7369032996041436
Epoch: 20/50 Loss: 2.7509697811944145
Epoch: 20/50 Loss: 2.748047545978001
Epoch: 20/50 Loss: 2.7799260139465334
Epoch: 20/50 Loss: 2.848640581539699
Epoch: 20/50 Loss: 2.8492121832711357
Epoch: 20/50 Loss: 2.815487129347665
Epoch: 20/50 Loss: 2.863896700314113
Epoch: 21/50 Loss: 2.7544435893788055
Epoch: 21/50 Loss: 2.68363504750388
Epoch: 21/50 Loss: 2.6715673446655273
Epoch: 21/50 Loss: 2.7067603656223844
Epoch: 21/50 Loss: 2.7482503618512837
Epoch: 21/50 Loss: 2.760567767279489
Epoch: 21/50 Loss: 2.753515352521624
Epoch: 21/50 Loss: 2.7945878744125365
Epoch: 21/50 Loss: 2.7866214888436454
Epoch: 22/50 Loss: 2.708164749776616
Epoch: 22/50 Loss: 2.670598237855094
Epoch: 22/50 Loss: 2.664269505228315
Epoch: 22/50 Loss: 2.6822616338729857
Epoch: 22/50 Loss: 2.7025517327444892
Epoch: 22/50 Loss: 2.6802537986210413
Epoch: 22/50 Loss: 2.7026992525373186
Epoch: 22/50 Loss: 2.75240889276777
Epoch: 22/50 Loss: 2.783597666876657
Epoch: 23/50 Loss: 2.646365043871543
Epoch: 23/50 Loss: 2.558629730769566
Epoch: 23/50 Loss: 2.6195249353136334
Epoch: 23/50 Loss: 2.630699368885585
Epoch: 23/50 Loss: 2.6271732994488306
Epoch: 23/50 Loss: 2.636791239465986
Epoch: 23/50 Loss: 2.6835559913090297
Epoch: 23/50 Loss: 2.650826971871512
Epoch: 23/50 Loss: 2.6798359530312674
Epoch: 24/50 Loss: 2.6136786411790287
Epoch: 24/50 Loss: 2.5360992772238595
Epoch: 24/50 Loss: 2.5446895360946655
Epoch: 24/50 Loss: 2.5672266585486274
Epoch: 24/50 Loss: 2.6107143538338797
Epoch: 24/50 Loss: 2.632528281211853
Epoch: 24/50 Loss: 2.6251633405685424
Epoch: 24/50 Loss: 2.616268335069929
Epoch: 24/50 Loss: 2.6523405790328978
Epoch: 25/50 Loss: 2.5714397798566258
Epoch: 25/50 Loss: 2.4533538137163435
Epoch: 25/50 Loss: 2.51357182094029
Epoch: 25/50 Loss: 2.469906779697963
Epoch: 25/50 Loss: 2.5584763833454676
Epoch: 25/50 Loss: 2.5740460804530554
Epoch: 25/50 Loss: 2.5934419087001257
Epoch: 25/50 Loss: 2.591300926889692
Epoch: 25/50 Loss: 2.6154568433761596
Epoch: 26/50 Loss: 2.52015519843382
Epoch: 26/50 Loss: 2.476429765565055
Epoch: 26/50 Loss: 2.4495496273040773
Epoch: 26/50 Loss: 2.4376726491110667
Epoch: 26/50 Loss: 2.549856812613351
Epoch: 26/50 Loss: 2.509986468723842
Epoch: 26/50 Loss: 2.5428796121052333
Epoch: 26/50 Loss: 2.512659696170262
Epoch: 26/50 Loss: 2.5556471177509854
Epoch: 27/50 Loss: 2.490535054136725
Epoch: 27/50 Loss: 2.3894227947507587
Epoch: 27/50 Loss: 2.426471178872245
Epoch: 27/50 Loss: 2.437322703429631
Epoch: 27/50 Loss: 2.4460490754672457
Epoch: 27/50 Loss: 2.4874588012695313
Epoch: 27/50 Loss: 2.4904942767960683
Epoch: 27/50 Loss: 2.4763526099068778
Epoch: 27/50 Loss: 2.5548475180353436
Epoch: 28/50 Loss: 2.4073703665943706
Epoch: 28/50 Loss: 2.372792223521641
Epoch: 28/50 Loss: 2.4123150587081907
Epoch: 28/50 Loss: 2.3972943578447614
Epoch: 28/50 Loss: 2.40703364440373
Epoch: 28/50 Loss: 2.4301648003714424
Epoch: 28/50 Loss: 2.4399937646729604
Epoch: 28/50 Loss: 2.4954469612666537
Epoch: 28/50 Loss: 2.455768745286124
Epoch: 29/50 Loss: 2.397683172541506
Epoch: 29/50 Loss: 2.3238330466406687
Epoch: 29/50 Loss: 2.363295521054949
Epoch: 29/50 Loss: 2.370130135331835
Epoch: 29/50 Loss: 2.3698825052806307
Epoch: 29/50 Loss: 2.3478225231170655
Epoch: 29/50 Loss: 2.4040791170937674
Epoch: 29/50 Loss: 2.423765894344875
Epoch: 29/50 Loss: 2.4463464055742534
Epoch: 30/50 Loss: 2.360941117300707
Epoch: 30/50 Loss: 2.2716475333486286
Epoch: 30/50 Loss: 2.3091679113251824
Epoch: 30/50 Loss: 2.3252052579607283
Epoch: 30/50 Loss: 2.344332524708339
Epoch: 30/50 Loss: 2.333526756082262
Epoch: 30/50 Loss: 2.403758772781917
Epoch: 30/50 Loss: 2.3961522681372505
Epoch: 30/50 Loss: 2.4205439959253585
Epoch: 31/50 Loss: 2.3118451772367252
Epoch: 31/50 Loss: 2.2471893974712915
Epoch: 31/50 Loss: 2.2583816902978078
Epoch: 31/50 Loss: 2.285195175239018
Epoch: 31/50 Loss: 2.316822532245091
Epoch: 31/50 Loss: 2.3036232488495965
Epoch: 31/50 Loss: 2.350875776154654
Epoch: 31/50 Loss: 2.350268871443612
Epoch: 31/50 Loss: 2.3632922547204154
Epoch: 32/50 Loss: 2.2948824254905476
Epoch: 32/50 Loss: 2.1964548945426943
Epoch: 32/50 Loss: 2.2358834062303816
Epoch: 32/50 Loss: 2.29040002652577
Epoch: 32/50 Loss: 2.269745818206242
Epoch: 32/50 Loss: 2.3250842690467834
Epoch: 32/50 Loss: 2.3078527859279085
Epoch: 32/50 Loss: 2.335926515715463
Epoch: 32/50 Loss: 2.333018832547324
Epoch: 33/50 Loss: 2.2480527539463604
Epoch: 33/50 Loss: 2.195439977305276
Epoch: 33/50 Loss: 2.193083245413644
Epoch: 33/50 Loss: 2.2359470350401742
Epoch: 33/50 Loss: 2.2497006143842424
Epoch: 33/50 Loss: 2.2203555396624974
Epoch: 33/50 Loss: 2.2936851058687484
Epoch: 33/50 Loss: 2.2817083733422416
Epoch: 33/50 Loss: 2.30333331823349
Epoch: 34/50 Loss: 2.212402143899132
Epoch: 34/50 Loss: 2.1554777060236248
Epoch: 34/50 Loss: 2.169258538314274
Epoch: 34/50 Loss: 2.2345673067229135
Epoch: 34/50 Loss: 2.216732505389622
Epoch: 34/50 Loss: 2.2413916860307967
Epoch: 34/50 Loss: 2.268501889705658
Epoch: 34/50 Loss: 2.259492937156132
Epoch: 34/50 Loss: 2.273309544154576
Epoch: 35/50 Loss: 2.193591191488154
Epoch: 35/50 Loss: 2.107555329799652
Epoch: 35/50 Loss: 2.154343914985657
Epoch: 35/50 Loss: 2.177243222509112
Epoch: 35/50 Loss: 2.1875601496015276
Epoch: 35/50 Loss: 2.178527656623295
Epoch: 35/50 Loss: 2.2283037321908132
Epoch: 35/50 Loss: 2.221361233506884
Epoch: 35/50 Loss: 2.215214955806732
Epoch: 36/50 Loss: 2.1895538954173817
Epoch: 36/50 Loss: 2.0828058736664907
Epoch: 36/50 Loss: 2.1126391615186417
Epoch: 36/50 Loss: 2.1754099488258363
Epoch: 36/50 Loss: 2.1345284359795706
Epoch: 36/50 Loss: 2.150758901664189
Epoch: 36/50 Loss: 2.212748488358089
Epoch: 36/50 Loss: 2.1947893227849686
Epoch: 36/50 Loss: 2.2178100058010646
Epoch: 37/50 Loss: 2.14756804003435
Epoch: 37/50 Loss: 2.093848650796073
Epoch: 37/50 Loss: 2.0984077283314297
Epoch: 37/50 Loss: 2.1372158527374268
Epoch: 37/50 Loss: 2.1128433159419466
Epoch: 37/50 Loss: 2.127329615184239
Epoch: 37/50 Loss: 2.1619282279695784
Epoch: 37/50 Loss: 2.198799615246909
Epoch: 37/50 Loss: 2.1974464212145124
Epoch: 38/50 Loss: 2.1070261247017803
Epoch: 38/50 Loss: 2.0460262605122157
Epoch: 38/50 Loss: 2.0947006719452994
Epoch: 38/50 Loss: 2.1200811113630023
Epoch: 38/50 Loss: 2.0880778840609957
Epoch: 38/50 Loss: 2.125659843853542
Epoch: 38/50 Loss: 2.097272608961378
Epoch: 38/50 Loss: 2.1315091473715646
Epoch: 38/50 Loss: 2.161464047431946
Epoch: 39/50 Loss: 2.073323145508766
Epoch: 39/50 Loss: 1.996128533567701
Epoch: 39/50 Loss: 2.0733177168028694
Epoch: 39/50 Loss: 2.053796071665628
Epoch: 39/50 Loss: 2.070925656386784
Epoch: 39/50 Loss: 2.1071492859295438
Epoch: 39/50 Loss: 2.135227647849492
Epoch: 39/50 Loss: 2.1293258445603507
Epoch: 39/50 Loss: 2.1107798116547722
Epoch: 40/50 Loss: 2.079432239427286
Epoch: 40/50 Loss: 2.0094535640307836
Epoch: 40/50 Loss: 1.995458584172385
Epoch: 40/50 Loss: 2.0292436923299517
Epoch: 40/50 Loss: 2.061707774230412
Epoch: 40/50 Loss: 2.0463828495570593
Epoch: 40/50 Loss: 2.05095888376236
Epoch: 40/50 Loss: 2.1283125604901993
Epoch: 40/50 Loss: 2.1141879779951913
Epoch: 41/50 Loss: 2.031243524130653
Epoch: 41/50 Loss: 1.9578698260443552
Epoch: 41/50 Loss: 1.9845003162111554
Epoch: 41/50 Loss: 1.9913522464888436
Epoch: 41/50 Loss: 2.0257262127740043
Epoch: 41/50 Loss: 2.0494320477758134
Epoch: 41/50 Loss: 2.0626709393092564
Epoch: 41/50 Loss: 2.0784010427338737
Epoch: 41/50 Loss: 2.0912981033325195
Epoch: 42/50 Loss: 2.0037383437156677
Epoch: 42/50 Loss: 1.9415868367467608
Epoch: 42/50 Loss: 1.9817078743662153
Epoch: 42/50 Loss: 1.9720311369214738
Epoch: 42/50 Loss: 1.9629854832376752
Epoch: 42/50 Loss: 2.028129836491176
Epoch: 42/50 Loss: 2.0672629714012145
Epoch: 42/50 Loss: 2.0489289709499903
Epoch: 42/50 Loss: 2.083579102584294
Epoch: 43/50 Loss: 1.9823575756129097
Epoch: 43/50 Loss: 1.94189521414893
Epoch: 43/50 Loss: 1.952587684563228
Epoch: 43/50 Loss: 1.9672427126339505
Epoch: 43/50 Loss: 1.9639653512409756
Epoch: 43/50 Loss: 1.9870473487036568
Epoch: 43/50 Loss: 2.040774977207184
Epoch: 43/50 Loss: 2.042361239024571
Epoch: 43/50 Loss: 2.079042250769479
Epoch: 44/50 Loss: 1.9751747408333946
Epoch: 44/50 Loss: 1.9321264488356453
Epoch: 44/50 Loss: 1.9261466554233007
Epoch: 44/50 Loss: 1.9487266302108766
Epoch: 44/50 Loss: 1.9478412406785148
Epoch: 44/50 Loss: 2.0103042040552412
Epoch: 44/50 Loss: 1.9482695136751447
Epoch: 44/50 Loss: 1.9953678011894227
Epoch: 44/50 Loss: 1.980255617414202
Epoch: 45/50 Loss: 1.94035126005902
Epoch: 45/50 Loss: 1.888668635913304
Epoch: 45/50 Loss: 1.9094037822314671
Epoch: 45/50 Loss: 1.9335040722574506
Epoch: 45/50 Loss: 1.9291460343769617
Epoch: 45/50 Loss: 1.978906694480351
Epoch: 45/50 Loss: 1.9601030741419112
Epoch: 45/50 Loss: 1.9625840170042856
Epoch: 45/50 Loss: 2.0294213856969563
Epoch: 46/50 Loss: 1.9146682663875467
Epoch: 46/50 Loss: 1.8680640425000872
Epoch: 46/50 Loss: 1.9046074713979448
Epoch: 46/50 Loss: 1.8957474981035505
Epoch: 46/50 Loss: 1.9411181279591152
Epoch: 46/50 Loss: 1.9756375244685582
Epoch: 46/50 Loss: 1.9517411981310164
Epoch: 46/50 Loss: 1.9813563755580357
Epoch: 46/50 Loss: 1.9896019441740853
Epoch: 47/50 Loss: 1.9192617149914013
Epoch: 47/50 Loss: 1.8236660957336426
Epoch: 47/50 Loss: 1.8716544730322702
Epoch: 47/50 Loss: 1.925037567956107
Epoch: 47/50 Loss: 1.9364051750728062
Epoch: 47/50 Loss: 1.9612844671521867
Epoch: 47/50 Loss: 1.9387702907834734
Epoch: 47/50 Loss: 1.9293441465922765
Epoch: 47/50 Loss: 1.9413581252098084
Epoch: 48/50 Loss: 1.8936279915711458
Epoch: 48/50 Loss: 1.8557534098625184
Epoch: 48/50 Loss: 1.858554037979671
Epoch: 48/50 Loss: 1.8771869897842408
Epoch: 48/50 Loss: 1.8929686750684465
Epoch: 48/50 Loss: 1.9068133916173662
Epoch: 48/50 Loss: 1.910611379146576
Epoch: 48/50 Loss: 1.9103894063404627
Epoch: 48/50 Loss: 1.9444267698696682
Epoch: 49/50 Loss: 1.875596606556107
Epoch: 49/50 Loss: 1.7928036161831447
Epoch: 49/50 Loss: 1.8528374859264918
Epoch: 49/50 Loss: 1.8323426536151342
Epoch: 49/50 Loss: 1.884366135937827
Epoch: 49/50 Loss: 1.8988481266157968
Epoch: 49/50 Loss: 1.8809922899518694
Epoch: 49/50 Loss: 1.9080183369772774
Epoch: 49/50 Loss: 1.9551747270992823
Epoch: 50/50 Loss: 1.87488573438981
Epoch: 50/50 Loss: 1.7990901947021485
Epoch: 50/50 Loss: 1.7799692153930664
Epoch: 50/50 Loss: 1.8696739111627851
Epoch: 50/50 Loss: 1.809240927015032
Epoch: 50/50 Loss: 1.8521770409175329
Epoch: 50/50 Loss: 1.8830704859324865
Epoch: 50/50 Loss: 1.9068744455065045
Epoch: 50/50 Loss: 1.88719231401171
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:**

1. Dummy parameters to fine-tune the first run:
   * sequence_length = 20
   * batch_size = 200
   * num_epochs = 1
   * learning_rate = 0.01
   * vocab_size = len(vocab_to_int)
   * output_size = 1
   * embedding_dim = 15
   * hidden_dim = 10
   * n_layers = 2
2. It kept blowing up until I changed output_size to vocab_size, although this run didn't produce very good results and the loss was high.
   * sequence_length = 50
   * batch_size = 128, by suggestions in previous lessons
   * num_epochs = 20
   * learning_rate = 0.01
   * vocab_size = len(vocab_to_int)+1
   * output_size = vocab_size
   * embedding_dim = 400, like in the sentiment RNN
   * hidden_dim = 256, like in the sentiment RNN
   * n_layers = 2, like in the sentiment RNN
3. Great improvement in results; the loss got near 3.0 by epoch 25, so I stopped there to test the results.
   * sequence_length = 50
   * batch_size = 128, by suggestions in previous lessons
   * num_epochs = 20
   * learning_rate = 0.001
   * vocab_size = len(vocab_to_int)+len(token_dict)+1
   * output_size = vocab_size
   * embedding_dim = 400, like in the sentiment RNN
   * hidden_dim = 256, like in the sentiment RNN
   * n_layers = 3
4. Loss reduced to 2.73, but the first line looks weird, e.g. "kramer: carl: stan: chiropractor: cops debby: dated!"
   * sequence_length = 20
   * batch_size = 128, by suggestions in previous lessons
   * num_epochs = 30
   * learning_rate = 0.001
   * vocab_size = len(vocab_to_int)+len(token_dict)+1
   * output_size = vocab_size
   * embedding_dim = 400, like in the sentiment RNN
   * hidden_dim = 256, like in the sentiment RNN
   * n_layers = 3
5. Loss reduced to 1.89, and the generated text looks much better. I think I'm done.
   * sequence_length = 10
   * batch_size = 128, by suggestions in previous lessons
   * num_epochs = 50
   * learning_rate = 0.001
   * vocab_size = len(vocab_to_int)+len(token_dict)+1
   * output_size = vocab_size
   * embedding_dim = 900
   * hidden_dim = 512
   * n_layers = 3

--- Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:39: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
with open("generated_script_iter5_1.txt", "w") as f:
    f.write(generated_script)
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
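###Markdown
The cell above intentionally leaves `create_lookup_tables` as a TODO. A minimal, illustrative sketch of one possible implementation follows (the `_sketch` suffix keeps it from shadowing the stub); it assumes a most-frequent-word-first ordering, but the exact ordering does not matter as long as the two dictionaries are consistent inverses of each other.
###Code
# Illustrative sketch only, not the graded solution.
from collections import Counter

def create_lookup_tables_sketch(text):
    """Return (vocab_to_int, int_to_vocab) for a list of words."""
    word_counts = Counter(text)
    # most frequent words receive the smallest ids (any consistent ordering works)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    vocab_to_int = {word: idx for idx, word in enumerate(sorted_vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

# quick check on a toy word list
v2i_demo, i2v_demo = create_lookup_tables_sketch(['to', 'be', 'or', 'not', 'to', 'be'])
print(v2i_demo)
print(i2v_demo)
###Output
_____no_output_____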
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
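###Markdown
For reference, a minimal sketch of the `token_lookup` dictionary described above (illustrative only; any token strings work as long as they cannot be mistaken for ordinary words).
###Code
# Illustrative sketch only: punctuation symbol -> delimiter-safe token.
def token_lookup_sketch():
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '-': '||Dash||',
        '\n': '||Return||'
    }

print(token_lookup_sketch()['!'])
###Output
_____no_output_____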
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
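###Markdown
One possible way to fill in the `batch_data` TODO above, shown as an illustrative sketch (named `batch_data_sketch` so it does not shadow the stub): slide a window of `sequence_length` ids over the text, take the id that follows each window as its target, and wrap both tensors in a `TensorDataset`/`DataLoader`.
###Code
# Illustrative sketch only, not the graded solution.
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    words = np.asarray(words)
    n_sequences = len(words) - sequence_length  # number of windows that still have a target
    features = np.array([words[i:i + sequence_length] for i in range(n_sequences)])
    targets = np.array([words[i + sequence_length] for i in range(n_sequences)])
    data = TensorDataset(torch.from_numpy(features), torch.from_numpy(targets))
    return DataLoader(data, shuffle=True, batch_size=batch_size)

# toy check: each target is the word id right after its feature window
loader = batch_data_sketch(list(range(50)), sequence_length=5, batch_size=10)
x, y = next(iter(loader))
print(x.shape, y.shape)  # torch.Size([10, 5]) torch.Size([10])
###Output
_____no_output_____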
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
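###Markdown
A compact, illustrative sketch of how the `RNN` module above could be completed, following the hints (embedding, then LSTM, then a fully-connected layer, keeping only the last time step's word scores). The choice of an LSTM rather than a GRU and the class name `RNNSketch` are assumptions, not the graded solution.
###Code
# Illustrative sketch only, not the graded solution.
import torch
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.output_size = output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input), hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)  # stack all time steps
        out = self.fc(lstm_out).view(batch_size, -1, self.output_size)
        return out[:, -1], hidden  # word scores for the last time step only

    def init_hidden(self, batch_size):
        # zero-initialised (h, c) on the same device as the model's weights
        weight = next(self.parameters()).data
        h = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
        return (h, h.clone())
###Output
_____no_output_____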
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
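###Markdown
An illustrative sketch of the training step described above: detach the hidden state from the previous batch's graph, run the forward pass, backpropagate, and step the optimizer. The gradient-clipping call is an optional safeguard that the stub does not require.
###Code
# Illustrative sketch only, not the graded solution.
import torch.nn as nn

def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:  # move model and data to the GPU when one is available
        rnn.cuda()
        inp, target = inp.cuda(), target.cuda()
    hidden = tuple(h.data for h in hidden)  # detach from the previous batch's graph
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # optional: guard against exploding gradients
    optimizer.step()
    return loss.item(), hidden
###Output
_____no_output_____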
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
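###Markdown
The cell above is left blank on purpose. Purely for reference, the values below mirror one working configuration that appears later in this document; they are illustrative and not the only reasonable choice.
###Code
# Illustrative values only (taken from a completed run later in this document).
sequence_length = 25
batch_size = 100
num_epochs = 50
learning_rate = 0.001
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 300
hidden_dim = 128
n_layers = 2
show_every_n_batches = 500
###Output
_____no_output_____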
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def word_counts_sorted(words):
word_count = Counter(words)
word_count_sorted = sorted(word_count.items(), key=lambda t: t[1], reverse=True)
return [w[0] for w in word_count_sorted]
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words_sorted_by_freq = word_counts_sorted(text)
## Build a dictionary that maps words to integers
vocab_to_int = dict()
i = 0
for w in words_sorted_by_freq:
vocab_to_int[w] = i
i += 1
int_to_vocab = dict((t[1], t[0]) for t in vocab_to_int.items())
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
from string import punctuation
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
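###Markdown
A quick, illustrative demonstration of what the token dictionary accomplishes: each symbol is replaced by its token with spaces around it, so punctuation ends up as its own "word" (the actual pre-processing is performed by `helper.preprocess_and_save_data` below).
###Code
# Illustrative only: apply the token dictionary to a sample line of dialogue.
sample = 'george: are you through?'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())  # ['george:', 'are', 'you', 'through', '||question_mark||']
###Output
_____no_output_____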
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def _generate_batches(words, batch_size, sequence_length):
running_length = 0
batch_x = []
batch_y = []
features = []
targets = []
for i in range(0, len(words)):
end = i + sequence_length
if end <= len(words) - 1:
batch_x.append(words[i: end])
batch_y.append(words[end])
running_length += sequence_length
# Yield a batch and start a new batch
if running_length % (sequence_length * batch_size) == 0:
features.extend(batch_x)
targets.extend(batch_y)
batch_x = []
batch_y = []
running_length = 0
return torch.from_numpy(np.array(features)), torch.from_numpy(np.array(targets))
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# Truncate any extra words so we're able to generate full batches
num_words_full_batches = len(words) - len(words) % (batch_size * sequence_length)
truncated_words = words[:num_words_full_batches]
# Generating batches
feature_tensors, target_tensors = _generate_batches(truncated_words, batch_size, sequence_length)
data = TensorDataset(feature_tensors, target_tensors)
# return a dataloader
return torch.utils.data.DataLoader(data,
shuffle=True,
batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = list(range(50))
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 14, 15, 16, 17, 18],
[ 5, 6, 7, 8, 9],
[ 21, 22, 23, 24, 25],
[ 0, 1, 2, 3, 4],
[ 37, 38, 39, 40, 41],
[ 16, 17, 18, 19, 20],
[ 10, 11, 12, 13, 14],
[ 1, 2, 3, 4, 5],
[ 38, 39, 40, 41, 42],
[ 17, 18, 19, 20, 21]])
torch.Size([10])
tensor([ 19, 10, 26, 5, 42, 21, 15, 6, 43, 22])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# define all layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(num_layers=n_layers,
input_size=embedding_dim,
hidden_size=hidden_dim,
dropout=dropout,
batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(dropout)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.embedding_dim = embedding_dim
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
embed = self.embed(nn_input)
lstm_out, hidden = self.lstm(embed, hidden)
out = self.fc(lstm_out)
# return one batch of output word scores and the hidden state
# Take all batches, the last prediction of each sequence and the full
# output dimension given by self.output_size
return out[:, -1, :], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
if train_on_gpu:
return (
torch.zeros((self.n_layers, batch_size, self.hidden_dim)).cuda(),
torch.zeros((self.n_layers, batch_size, self.hidden_dim)).cuda()
)
else:
return (
torch.zeros((self.n_layers, batch_size, self.hidden_dim)),
torch.zeros((self.n_layers, batch_size, self.hidden_dim))
)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
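###Markdown
A small, illustrative shape check on the `RNN` class defined above, using arbitrary tiny dimensions: `forward` should return one row of word scores per input sequence, i.e. a tensor of shape `(batch_size, output_size)`.
###Code
# Illustrative only: verify the output shape of the RNN defined above.
tiny_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
x = torch.randint(0, 20, (4, 6), dtype=torch.long)  # batch of 4 sequences, 6 word ids each
if train_on_gpu:
    tiny_rnn.cuda()
    x = x.cuda()
h = tiny_rnn.init_hidden(4)
scores, h = tiny_rnn(x, h)
print(scores.shape)  # expected: torch.Size([4, 20])
###Output
_____no_output_____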
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
rnn.cuda()
inp = inp.cuda()
target = target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
optimizer.zero_grad()
out, hidden = rnn(inp, hidden)
loss = criterion(out, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 25 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 50
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 128
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
# Have to make it an active session to keep the workspace from disconnecting
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 50 epoch(s)...
Epoch: 1/50 Loss: 5.665978420257568
Epoch: 1/50 Loss: 5.0169265079498295
Epoch: 1/50 Loss: 4.848137440204621
Epoch: 1/50 Loss: 4.682399516105652
Epoch: 1/50 Loss: 4.613057251930237
Epoch: 1/50 Loss: 4.538241837501526
Epoch: 1/50 Loss: 4.478402349948883
Epoch: 1/50 Loss: 4.427395582199097
Epoch: 1/50 Loss: 4.383932968616485
Epoch: 1/50 Loss: 4.367075895309449
Epoch: 1/50 Loss: 4.327775761604309
Epoch: 1/50 Loss: 4.295110981464386
Epoch: 1/50 Loss: 4.274845807075501
Epoch: 1/50 Loss: 4.258869896411896
Epoch: 1/50 Loss: 4.241021359443665
Epoch: 1/50 Loss: 4.238419537067413
Epoch: 1/50 Loss: 4.206550037384033
Epoch: 2/50 Loss: 4.133071528128178
Epoch: 2/50 Loss: 4.072263426303864
Epoch: 2/50 Loss: 4.037250514984131
Epoch: 2/50 Loss: 4.065057234764099
Epoch: 2/50 Loss: 4.014511912822724
Epoch: 2/50 Loss: 4.05269927740097
Epoch: 2/50 Loss: 4.014891406536102
Epoch: 2/50 Loss: 4.021397609233857
Epoch: 2/50 Loss: 4.032319924354553
Epoch: 2/50 Loss: 4.020560225963592
Epoch: 2/50 Loss: 3.999581621170044
Epoch: 2/50 Loss: 4.00228077507019
Epoch: 2/50 Loss: 4.010295090675354
Epoch: 2/50 Loss: 4.016585222721099
Epoch: 2/50 Loss: 4.004399033546448
Epoch: 2/50 Loss: 3.9985734338760377
Epoch: 2/50 Loss: 4.031726098537445
Epoch: 3/50 Loss: 3.936938536180405
Epoch: 3/50 Loss: 3.860738731384277
Epoch: 3/50 Loss: 3.910912787437439
Epoch: 3/50 Loss: 3.855954945087433
Epoch: 3/50 Loss: 3.8701232986450194
Epoch: 3/50 Loss: 3.8690055327415465
Epoch: 3/50 Loss: 3.8590908284187315
Epoch: 3/50 Loss: 3.881987745285034
Epoch: 3/50 Loss: 3.868819770336151
Epoch: 3/50 Loss: 3.913546123981476
Epoch: 3/50 Loss: 3.8865797958374024
Epoch: 3/50 Loss: 3.889278178691864
Epoch: 3/50 Loss: 3.8865809488296508
Epoch: 3/50 Loss: 3.901768548488617
Epoch: 3/50 Loss: 3.8914022479057313
Epoch: 3/50 Loss: 3.8885285482406617
Epoch: 3/50 Loss: 3.8970329217910766
Epoch: 4/50 Loss: 3.820515423648482
Epoch: 4/50 Loss: 3.7834714465141297
Epoch: 4/50 Loss: 3.8036439714431762
Epoch: 4/50 Loss: 3.7685002427101137
Epoch: 4/50 Loss: 3.790644682407379
Epoch: 4/50 Loss: 3.7623613276481627
Epoch: 4/50 Loss: 3.809994971752167
Epoch: 4/50 Loss: 3.799825057506561
Epoch: 4/50 Loss: 3.7838487200737
Epoch: 4/50 Loss: 3.780543348789215
Epoch: 4/50 Loss: 3.815239399433136
Epoch: 4/50 Loss: 3.789267464160919
Epoch: 4/50 Loss: 3.8207636694908143
Epoch: 4/50 Loss: 3.81323193693161
Epoch: 4/50 Loss: 3.8332897081375124
Epoch: 4/50 Loss: 3.8301540651321413
Epoch: 4/50 Loss: 3.8151709957122804
Epoch: 5/50 Loss: 3.7474360824029094
Epoch: 5/50 Loss: 3.719182084083557
Epoch: 5/50 Loss: 3.7361260595321655
Epoch: 5/50 Loss: 3.6891013870239258
Epoch: 5/50 Loss: 3.6923867654800415
Epoch: 5/50 Loss: 3.7283069891929626
Epoch: 5/50 Loss: 3.731324594974518
Epoch: 5/50 Loss: 3.7333002038002014
Epoch: 5/50 Loss: 3.7333198223114015
Epoch: 5/50 Loss: 3.7210554070472717
Epoch: 5/50 Loss: 3.73999888086319
Epoch: 5/50 Loss: 3.7435364503860473
Epoch: 5/50 Loss: 3.7838356423377992
Epoch: 5/50 Loss: 3.757926682472229
Epoch: 5/50 Loss: 3.784348369598389
Epoch: 5/50 Loss: 3.7886485905647276
Epoch: 5/50 Loss: 3.7780577583312986
Epoch: 6/50 Loss: 3.71802559922614
Epoch: 6/50 Loss: 3.671091335296631
Epoch: 6/50 Loss: 3.6540823249816894
Epoch: 6/50 Loss: 3.6713585247993468
Epoch: 6/50 Loss: 3.6624939508438112
Epoch: 6/50 Loss: 3.6760298709869383
Epoch: 6/50 Loss: 3.6858339667320252
Epoch: 6/50 Loss: 3.696845799446106
Epoch: 6/50 Loss: 3.7050984830856324
Epoch: 6/50 Loss: 3.7150109233856203
Epoch: 6/50 Loss: 3.700092813014984
Epoch: 6/50 Loss: 3.7038576493263244
Epoch: 6/50 Loss: 3.7238416905403136
Epoch: 6/50 Loss: 3.7007095093727114
Epoch: 6/50 Loss: 3.7239683785438538
Epoch: 6/50 Loss: 3.7325985765457155
Epoch: 6/50 Loss: 3.725627772808075
Epoch: 7/50 Loss: 3.68162337717941
Epoch: 7/50 Loss: 3.618581917285919
Epoch: 7/50 Loss: 3.628442988872528
Epoch: 7/50 Loss: 3.6433770160675047
Epoch: 7/50 Loss: 3.6348714351654055
Epoch: 7/50 Loss: 3.6389886870384216
Epoch: 7/50 Loss: 3.6667876358032228
Epoch: 7/50 Loss: 3.6417910103797912
Epoch: 7/50 Loss: 3.6606337752342224
Epoch: 7/50 Loss: 3.666503883361816
Epoch: 7/50 Loss: 3.674848457336426
Epoch: 7/50 Loss: 3.66684752702713
Epoch: 7/50 Loss: 3.67286186170578
Epoch: 7/50 Loss: 3.6944026093482973
Epoch: 7/50 Loss: 3.695402255058289
Epoch: 7/50 Loss: 3.707395256519318
Epoch: 7/50 Loss: 3.711767554283142
Epoch: 8/50 Loss: 3.647361376659491
Epoch: 8/50 Loss: 3.5971252851486204
Epoch: 8/50 Loss: 3.5882398381233216
Epoch: 8/50 Loss: 3.5874873690605162
Epoch: 8/50 Loss: 3.6090665702819824
Epoch: 8/50 Loss: 3.6027316541671754
Epoch: 8/50 Loss: 3.6264281067848207
Epoch: 8/50 Loss: 3.6368788180351257
Epoch: 8/50 Loss: 3.6352197766304015
Epoch: 8/50 Loss: 3.6271768832206726
Epoch: 8/50 Loss: 3.6288529925346373
Epoch: 8/50 Loss: 3.669486396312714
Epoch: 8/50 Loss: 3.6805806975364685
Epoch: 8/50 Loss: 3.6501689944267275
Epoch: 8/50 Loss: 3.655951674461365
Epoch: 8/50 Loss: 3.702442397117615
Epoch: 8/50 Loss: 3.6753872385025024
Epoch: 9/50 Loss: 3.6012979908965455
Epoch: 9/50 Loss: 3.5767907948493955
Epoch: 9/50 Loss: 3.574991093158722
Epoch: 9/50 Loss: 3.5742786002159117
Epoch: 9/50 Loss: 3.5751978969573974
Epoch: 9/50 Loss: 3.6051543679237366
Epoch: 9/50 Loss: 3.5989176692962648
Epoch: 9/50 Loss: 3.6182948198318483
Epoch: 9/50 Loss: 3.596789415836334
Epoch: 9/50 Loss: 3.6010834093093873
Epoch: 9/50 Loss: 3.6378290967941282
Epoch: 9/50 Loss: 3.6179610261917112
Epoch: 9/50 Loss: 3.654794400215149
Epoch: 9/50 Loss: 3.6269573378562927
Epoch: 9/50 Loss: 3.6531913776397706
Epoch: 9/50 Loss: 3.6433543601036074
Epoch: 9/50 Loss: 3.666853120803833
Epoch: 10/50 Loss: 3.5737401807931426
Epoch: 10/50 Loss: 3.5578371963500977
Epoch: 10/50 Loss: 3.5428144307136535
Epoch: 10/50 Loss: 3.5437649488449097
Epoch: 10/50 Loss: 3.5719191613197325
Epoch: 10/50 Loss: 3.568979357242584
Epoch: 10/50 Loss: 3.583433915615082
Epoch: 10/50 Loss: 3.582873799800873
Epoch: 10/50 Loss: 3.5797383074760436
Epoch: 10/50 Loss: 3.6051641955375673
Epoch: 10/50 Loss: 3.5982613825798033
Epoch: 10/50 Loss: 3.6116794395446776
Epoch: 10/50 Loss: 3.6152942304611204
Epoch: 10/50 Loss: 3.6286778078079225
Epoch: 10/50 Loss: 3.652248655796051
Epoch: 10/50 Loss: 3.6343882913589476
Epoch: 10/50 Loss: 3.6414488568305967
Epoch: 11/50 Loss: 3.572366839388719
Epoch: 11/50 Loss: 3.532431574344635
Epoch: 11/50 Loss: 3.528107649803162
Epoch: 11/50 Loss: 3.539861231803894
Epoch: 11/50 Loss: 3.5334751214981077
Epoch: 11/50 Loss: 3.5287710666656493
Epoch: 11/50 Loss: 3.55285702085495
Epoch: 11/50 Loss: 3.565562283039093
Epoch: 11/50 Loss: 3.5666876006126405
Epoch: 11/50 Loss: 3.573531065940857
Epoch: 11/50 Loss: 3.5904076161384584
Epoch: 11/50 Loss: 3.6177877674102783
Epoch: 11/50 Loss: 3.5789804401397705
Epoch: 11/50 Loss: 3.622711940765381
Epoch: 11/50 Loss: 3.6155203919410708
Epoch: 11/50 Loss: 3.6008870348930357
Epoch: 11/50 Loss: 3.6378848910331727
Epoch: 12/50 Loss: 3.550605507925965
Epoch: 12/50 Loss: 3.509054733276367
Epoch: 12/50 Loss: 3.5086754894256593
Epoch: 12/50 Loss: 3.5126325273513794
Epoch: 12/50 Loss: 3.547050211429596
Epoch: 12/50 Loss: 3.559761008262634
Epoch: 12/50 Loss: 3.532568666934967
Epoch: 12/50 Loss: 3.5610313258171082
Epoch: 12/50 Loss: 3.5551996307373046
Epoch: 12/50 Loss: 3.564590669631958
Epoch: 12/50 Loss: 3.5497317032814024
Epoch: 12/50 Loss: 3.58452633190155
Epoch: 12/50 Loss: 3.5737342362403868
Epoch: 12/50 Loss: 3.599345323085785
Epoch: 12/50 Loss: 3.570410249233246
Epoch: 12/50 Loss: 3.60769517993927
Epoch: 12/50 Loss: 3.5916245279312133
Epoch: 13/50 Loss: 3.5263778940589066
Epoch: 13/50 Loss: 3.4866138949394228
Epoch: 13/50 Loss: 3.5015867552757265
Epoch: 13/50 Loss: 3.504317081451416
Epoch: 13/50 Loss: 3.5266551666259764
Epoch: 13/50 Loss: 3.4991960911750795
Epoch: 13/50 Loss: 3.547344609260559
Epoch: 13/50 Loss: 3.5538316445350646
Epoch: 13/50 Loss: 3.5248537077903745
Epoch: 13/50 Loss: 3.5384564394950866
Epoch: 13/50 Loss: 3.5632798161506654
Epoch: 13/50 Loss: 3.546135127067566
Epoch: 13/50 Loss: 3.5743777422904968
Epoch: 13/50 Loss: 3.5774786682128905
Epoch: 13/50 Loss: 3.5759130749702455
Epoch: 13/50 Loss: 3.5944788742065428
Epoch: 13/50 Loss: 3.6091462407112123
Epoch: 14/50 Loss: 3.53596449879571
Epoch: 14/50 Loss: 3.4874588069915773
Epoch: 14/50 Loss: 3.4747647271156312
Epoch: 14/50 Loss: 3.488644341945648
Epoch: 14/50 Loss: 3.483150797367096
Epoch: 14/50 Loss: 3.4982693362236024
Epoch: 14/50 Loss: 3.514849506855011
Epoch: 14/50 Loss: 3.5364496116638184
Epoch: 14/50 Loss: 3.5393593997955324
Epoch: 14/50 Loss: 3.537456923007965
Epoch: 14/50 Loss: 3.531864010810852
Epoch: 14/50 Loss: 3.5582237830162047
Epoch: 14/50 Loss: 3.5628165078163145
Epoch: 14/50 Loss: 3.553090453147888
Epoch: 14/50 Loss: 3.5639837021827696
Epoch: 14/50 Loss: 3.5692410712242126
Epoch: 14/50 Loss: 3.58385227060318
Epoch: 15/50 Loss: 3.5146466476368294
Epoch: 15/50 Loss: 3.472718632221222
Epoch: 15/50 Loss: 3.455003091812134
Epoch: 15/50 Loss: 3.486484664440155
Epoch: 15/50 Loss: 3.490838740348816
Epoch: 15/50 Loss: 3.5043194742202757
Epoch: 15/50 Loss: 3.4909178233146667
Epoch: 15/50 Loss: 3.5258136510849
Epoch: 15/50 Loss: 3.5122996978759766
Epoch: 15/50 Loss: 3.5245024442672728
Epoch: 15/50 Loss: 3.5161398115158082
Epoch: 15/50 Loss: 3.557757203578949
Epoch: 15/50 Loss: 3.521192476272583
Epoch: 15/50 Loss: 3.576946927547455
Epoch: 15/50 Loss: 3.556142771720886
Epoch: 15/50 Loss: 3.5786414074897768
Epoch: 15/50 Loss: 3.5669328298568725
Epoch: 16/50 Loss: 3.5173493683404464
Epoch: 16/50 Loss: 3.4612998495101928
Epoch: 16/50 Loss: 3.4822258138656617
Epoch: 16/50 Loss: 3.47709379196167
Epoch: 16/50 Loss: 3.468775255680084
Epoch: 16/50 Loss: 3.494236065387726
Epoch: 16/50 Loss: 3.486546045780182
Epoch: 16/50 Loss: 3.4973472776412966
Epoch: 16/50 Loss: 3.5130592155456544
Epoch: 16/50 Loss: 3.508002788066864
Epoch: 16/50 Loss: 3.521636749267578
Epoch: 16/50 Loss: 3.5167258534431456
Epoch: 16/50 Loss: 3.508400712490082
Epoch: 16/50 Loss: 3.536520465373993
Epoch: 16/50 Loss: 3.5579153928756715
Epoch: 16/50 Loss: 3.556748770236969
Epoch: 16/50 Loss: 3.564290454864502
Epoch: 17/50 Loss: 3.504834327071872
Epoch: 17/50 Loss: 3.4435116424560546
Epoch: 17/50 Loss: 3.4701067690849303
Epoch: 17/50 Loss: 3.465753809928894
Epoch: 17/50 Loss: 3.4586828894615174
Epoch: 17/50 Loss: 3.470047769546509
Epoch: 17/50 Loss: 3.5183913278579713
Epoch: 17/50 Loss: 3.481535150051117
Epoch: 17/50 Loss: 3.505890574455261
Epoch: 17/50 Loss: 3.500228151798248
Epoch: 17/50 Loss: 3.5341903920173645
Epoch: 17/50 Loss: 3.511247148513794
Epoch: 17/50 Loss: 3.5102687582969665
Epoch: 17/50 Loss: 3.5344887800216673
Epoch: 17/50 Loss: 3.5350718722343446
Epoch: 17/50 Loss: 3.5435954203605653
Epoch: 17/50 Loss: 3.5512260165214538
Epoch: 18/50 Loss: 3.4969657536210685
Epoch: 18/50 Loss: 3.4343037805557253
Epoch: 18/50 Loss: 3.4589670886993407
Epoch: 18/50 Loss: 3.426903958320618
Epoch: 18/50 Loss: 3.4694749937057496
Epoch: 18/50 Loss: 3.4789491395950316
Epoch: 18/50 Loss: 3.4861310353279116
Epoch: 18/50 Loss: 3.4753428177833556
Epoch: 18/50 Loss: 3.494155078411102
Epoch: 18/50 Loss: 3.495311378002167
Epoch: 18/50 Loss: 3.4858074102401733
Epoch: 18/50 Loss: 3.5281199479103087
Epoch: 18/50 Loss: 3.5191843485832215
Epoch: 18/50 Loss: 3.536732548713684
Epoch: 18/50 Loss: 3.5391246485710144
Epoch: 18/50 Loss: 3.5482352232933043
Epoch: 18/50 Loss: 3.5482407999038696
Epoch: 19/50 Loss: 3.4856664378597952
Epoch: 19/50 Loss: 3.4369329738616945
Epoch: 19/50 Loss: 3.4337092394828796
Epoch: 19/50 Loss: 3.456515658855438
Epoch: 19/50 Loss: 3.474489695072174
Epoch: 19/50 Loss: 3.4538460121154784
Epoch: 19/50 Loss: 3.4929011044502256
Epoch: 19/50 Loss: 3.4626132864952086
Epoch: 19/50 Loss: 3.4895211181640624
Epoch: 19/50 Loss: 3.5052530069351198
Epoch: 19/50 Loss: 3.499147717952728
Epoch: 19/50 Loss: 3.5201011662483217
Epoch: 19/50 Loss: 3.5147296781539916
Epoch: 19/50 Loss: 3.5303336515426635
Epoch: 19/50 Loss: 3.504849448680878
Epoch: 19/50 Loss: 3.5296687927246095
Epoch: 19/50 Loss: 3.5368871273994444
Epoch: 20/50 Loss: 3.4634800214523467
Epoch: 20/50 Loss: 3.4254993944168093
Epoch: 20/50 Loss: 3.4283327651023865
Epoch: 20/50 Loss: 3.4339080848693846
Epoch: 20/50 Loss: 3.4674853706359863
Epoch: 20/50 Loss: 3.441847899436951
Epoch: 20/50 Loss: 3.466998637676239
Epoch: 20/50 Loss: 3.490763847351074
Epoch: 20/50 Loss: 3.4636169214248658
Epoch: 20/50 Loss: 3.492629252433777
Epoch: 20/50 Loss: 3.487670637130737
Epoch: 20/50 Loss: 3.487798150539398
Epoch: 20/50 Loss: 3.5223937454223635
Epoch: 20/50 Loss: 3.5164123353958128
Epoch: 20/50 Loss: 3.5432417297363283
Epoch: 20/50 Loss: 3.4989303488731385
Epoch: 20/50 Loss: 3.5367606053352354
Epoch: 21/50 Loss: 3.481539615136763
Epoch: 21/50 Loss: 3.4320286607742307
Epoch: 21/50 Loss: 3.4493483858108522
Epoch: 21/50 Loss: 3.4255937213897707
Epoch: 21/50 Loss: 3.424235634326935
Epoch: 21/50 Loss: 3.4326418628692625
Epoch: 21/50 Loss: 3.4564402961730956
Epoch: 21/50 Loss: 3.452912940979004
Epoch: 21/50 Loss: 3.47379363489151
Epoch: 21/50 Loss: 3.4849308700561523
Epoch: 21/50 Loss: 3.488367651939392
Epoch: 21/50 Loss: 3.5065493931770324
Epoch: 21/50 Loss: 3.4910193247795105
Epoch: 21/50 Loss: 3.5322557706832884
Epoch: 21/50 Loss: 3.5088390069007875
Epoch: 21/50 Loss: 3.4959770379066466
Epoch: 21/50 Loss: 3.5131883645057678
Epoch: 22/50 Loss: 3.467194741241659
Epoch: 22/50 Loss: 3.4074358706474306
Epoch: 22/50 Loss: 3.446991012573242
Epoch: 22/50 Loss: 3.4414027194976806
Epoch: 22/50 Loss: 3.4316752276420592
Epoch: 22/50 Loss: 3.4447440924644472
Epoch: 22/50 Loss: 3.4299200592041017
Epoch: 22/50 Loss: 3.4482819442749024
Epoch: 22/50 Loss: 3.45684245634079
Epoch: 22/50 Loss: 3.48458287191391
Epoch: 22/50 Loss: 3.4497800693511964
Epoch: 22/50 Loss: 3.4729273228645323
Epoch: 22/50 Loss: 3.491223934173584
Epoch: 22/50 Loss: 3.5018385586738585
Epoch: 22/50 Loss: 3.5101219458580015
Epoch: 22/50 Loss: 3.5024631695747375
Epoch: 22/50 Loss: 3.5022378697395324
Epoch: 23/50 Loss: 3.449942881591593
Epoch: 23/50 Loss: 3.4050429649353027
Epoch: 23/50 Loss: 3.4246144828796385
Epoch: 23/50 Loss: 3.4011668257713317
Epoch: 23/50 Loss: 3.460912655353546
Epoch: 23/50 Loss: 3.433430491447449
Epoch: 23/50 Loss: 3.41539359998703
Epoch: 23/50 Loss: 3.444561444759369
Epoch: 23/50 Loss: 3.457598660945892
Epoch: 23/50 Loss: 3.4738542895317077
Epoch: 23/50 Loss: 3.4578466987609864
Epoch: 23/50 Loss: 3.481058517456055
Epoch: 23/50 Loss: 3.4806779255867006
Epoch: 23/50 Loss: 3.4673849601745603
Epoch: 23/50 Loss: 3.483893452167511
Epoch: 23/50 Loss: 3.499702574253082
Epoch: 23/50 Loss: 3.509336708068848
Epoch: 24/50 Loss: 3.447741306928692
Epoch: 24/50 Loss: 3.388996314048767
Epoch: 24/50 Loss: 3.4037780513763427
Epoch: 24/50 Loss: 3.4259408955574036
Epoch: 24/50 Loss: 3.4193519945144653
Epoch: 24/50 Loss: 3.4424388389587404
Epoch: 24/50 Loss: 3.425782106399536
Epoch: 24/50 Loss: 3.4528023481369017
Epoch: 24/50 Loss: 3.4385888028144835
Epoch: 24/50 Loss: 3.4520001974105834
Epoch: 24/50 Loss: 3.4663240399360657
Epoch: 24/50 Loss: 3.4584046430587767
Epoch: 24/50 Loss: 3.4756621322631838
Epoch: 24/50 Loss: 3.458785279750824
Epoch: 24/50 Loss: 3.5087973170280455
Epoch: 24/50 Loss: 3.4974903950691223
Epoch: 24/50 Loss: 3.4996256709098814
Epoch: 25/50 Loss: 3.4409126203768245
Epoch: 25/50 Loss: 3.4045880880355837
Epoch: 25/50 Loss: 3.3879650592803956
Epoch: 25/50 Loss: 3.4057058668136597
Epoch: 25/50 Loss: 3.4344066491127014
Epoch: 25/50 Loss: 3.428789680480957
Epoch: 25/50 Loss: 3.4367405586242676
Epoch: 25/50 Loss: 3.431917269706726
Epoch: 25/50 Loss: 3.431472014427185
Epoch: 25/50 Loss: 3.4526203370094297
Epoch: 25/50 Loss: 3.4506980090141295
Epoch: 25/50 Loss: 3.463947217464447
Epoch: 25/50 Loss: 3.47091597032547
Epoch: 25/50 Loss: 3.4849835686683655
Epoch: 25/50 Loss: 3.4908282761573792
Epoch: 25/50 Loss: 3.4707229528427126
Epoch: 25/50 Loss: 3.4920355944633483
Epoch: 26/50 Loss: 3.4291356466503378
Epoch: 26/50 Loss: 3.393321813106537
Epoch: 26/50 Loss: 3.4032944736480712
Epoch: 26/50 Loss: 3.4169971952438356
Epoch: 26/50 Loss: 3.409855649471283
Epoch: 26/50 Loss: 3.4256411895751953
Epoch: 26/50 Loss: 3.408281180858612
Epoch: 26/50 Loss: 3.413796736717224
Epoch: 26/50 Loss: 3.437118050098419
Epoch: 26/50 Loss: 3.444763524055481
Epoch: 26/50 Loss: 3.4659058175086974
Epoch: 26/50 Loss: 3.4697269616127016
Epoch: 26/50 Loss: 3.453572250843048
Epoch: 26/50 Loss: 3.4811980409622194
Epoch: 26/50 Loss: 3.473913890361786
Epoch: 26/50 Loss: 3.4819368462562563
Epoch: 26/50 Loss: 3.507728858947754
Epoch: 27/50 Loss: 3.4229290488034123
Epoch: 27/50 Loss: 3.3815350527763366
Epoch: 27/50 Loss: 3.3882220692634584
Epoch: 27/50 Loss: 3.4153892097473144
Epoch: 27/50 Loss: 3.4003035387992857
Epoch: 27/50 Loss: 3.4159571471214294
Epoch: 27/50 Loss: 3.4230825934410096
Epoch: 27/50 Loss: 3.4361836977005007
Epoch: 27/50 Loss: 3.4516974873542785
Epoch: 27/50 Loss: 3.4236026248931886
Epoch: 27/50 Loss: 3.4673097014427183
Epoch: 27/50 Loss: 3.4413963103294374
Epoch: 27/50 Loss: 3.4566046080589294
Epoch: 27/50 Loss: 3.4574181542396545
Epoch: 27/50 Loss: 3.467266495227814
Epoch: 27/50 Loss: 3.4736825070381165
Epoch: 27/50 Loss: 3.487335155010223
Epoch: 28/50 Loss: 3.431899970047201
Epoch: 28/50 Loss: 3.388866213798523
Epoch: 28/50 Loss: 3.405766252040863
Epoch: 28/50 Loss: 3.4049680228233337
Epoch: 28/50 Loss: 3.409099932670593
Epoch: 28/50 Loss: 3.426530571937561
Epoch: 28/50 Loss: 3.3979711627960203
Epoch: 28/50 Loss: 3.419227174758911
Epoch: 28/50 Loss: 3.433870563983917
Epoch: 28/50 Loss: 3.43893514251709
Epoch: 28/50 Loss: 3.452664625644684
Epoch: 28/50 Loss: 3.4370877785682676
Epoch: 28/50 Loss: 3.4702660803794863
Epoch: 28/50 Loss: 3.479471775531769
Epoch: 28/50 Loss: 3.46084951543808
Epoch: 28/50 Loss: 3.505499315261841
Epoch: 28/50 Loss: 3.474842000007629
Epoch: 29/50 Loss: 3.419697691786939
Epoch: 29/50 Loss: 3.368186450481415
Epoch: 29/50 Loss: 3.4131489787101748
Epoch: 29/50 Loss: 3.3972722849845884
Epoch: 29/50 Loss: 3.395373685836792
Epoch: 29/50 Loss: 3.405561022281647
Epoch: 29/50 Loss: 3.4143018002510073
Epoch: 29/50 Loss: 3.4255487995147704
Epoch: 29/50 Loss: 3.432977017402649
Epoch: 29/50 Loss: 3.432595416545868
Epoch: 29/50 Loss: 3.46607599401474
Epoch: 29/50 Loss: 3.459341167449951
Epoch: 29/50 Loss: 3.4565817108154295
Epoch: 29/50 Loss: 3.4545369091033935
Epoch: 29/50 Loss: 3.4487274169921873
Epoch: 29/50 Loss: 3.4793962984085085
Epoch: 29/50 Loss: 3.4774475412368773
Epoch: 30/50 Loss: 3.4266362702090696
Epoch: 30/50 Loss: 3.3707393894195556
Epoch: 30/50 Loss: 3.4009688816070556
Epoch: 30/50 Loss: 3.41780583524704
Epoch: 30/50 Loss: 3.404806713581085
Epoch: 30/50 Loss: 3.392343162059784
Epoch: 30/50 Loss: 3.4157952556610107
Epoch: 30/50 Loss: 3.394635624885559
Epoch: 30/50 Loss: 3.4278540353775027
Epoch: 30/50 Loss: 3.445987799167633
Epoch: 30/50 Loss: 3.4455573353767397
Epoch: 30/50 Loss: 3.458224308013916
Epoch: 30/50 Loss: 3.465588881969452
Epoch: 30/50 Loss: 3.4900054316520692
Epoch: 30/50 Loss: 3.4472007632255552
Epoch: 30/50 Loss: 3.460874794483185
Epoch: 30/50 Loss: 3.466542109012604
Epoch: 31/50 Loss: 3.4146132845767214
Epoch: 31/50 Loss: 3.3843071036338808
Epoch: 31/50 Loss: 3.3811347498893736
Epoch: 31/50 Loss: 3.4160743136405944
Epoch: 31/50 Loss: 3.3904830989837644
Epoch: 31/50 Loss: 3.4081887464523315
Epoch: 31/50 Loss: 3.417015996456146
Epoch: 31/50 Loss: 3.4328133087158204
Epoch: 31/50 Loss: 3.4275679950714113
Epoch: 31/50 Loss: 3.4138888907432556
Epoch: 31/50 Loss: 3.4186256895065306
Epoch: 31/50 Loss: 3.4532382264137267
Epoch: 31/50 Loss: 3.4611018452644347
Epoch: 31/50 Loss: 3.442553873062134
Epoch: 31/50 Loss: 3.4648084006309507
Epoch: 31/50 Loss: 3.4831327900886535
Epoch: 31/50 Loss: 3.481344985485077
Epoch: 32/50 Loss: 3.417527911395199
Epoch: 32/50 Loss: 3.3604442677497866
Epoch: 32/50 Loss: 3.3931580324172974
Epoch: 32/50 Loss: 3.4039223589897154
Epoch: 32/50 Loss: 3.4109639616012575
Epoch: 32/50 Loss: 3.38839937210083
Epoch: 32/50 Loss: 3.4070618963241577
Epoch: 32/50 Loss: 3.4343809013366697
Epoch: 32/50 Loss: 3.4400498456954955
Epoch: 32/50 Loss: 3.44074022769928
Epoch: 32/50 Loss: 3.4361036224365233
Epoch: 32/50 Loss: 3.4338715047836303
Epoch: 32/50 Loss: 3.4311938066482544
Epoch: 32/50 Loss: 3.4557366423606872
Epoch: 32/50 Loss: 3.452679683685303
Epoch: 32/50 Loss: 3.4605790557861327
Epoch: 32/50 Loss: 3.488764456272125
Epoch: 33/50 Loss: 3.4177266196228637
Epoch: 33/50 Loss: 3.3783233346939086
Epoch: 33/50 Loss: 3.3802886810302732
Epoch: 33/50 Loss: 3.3948074803352357
Epoch: 33/50 Loss: 3.3951561312675476
Epoch: 33/50 Loss: 3.3925180435180664
Epoch: 33/50 Loss: 3.4180615520477295
Epoch: 33/50 Loss: 3.4182900276184083
Epoch: 33/50 Loss: 3.406918329238892
Epoch: 33/50 Loss: 3.4217761926651002
Epoch: 33/50 Loss: 3.432539189338684
Epoch: 33/50 Loss: 3.4301042923927305
Epoch: 33/50 Loss: 3.437365408420563
Epoch: 33/50 Loss: 3.4685841836929323
Epoch: 33/50 Loss: 3.4410782403945923
Epoch: 33/50 Loss: 3.4405807752609254
Epoch: 33/50 Loss: 3.4609785046577453
Epoch: 34/50 Loss: 3.3981646825792526
Epoch: 34/50 Loss: 3.366820156097412
Epoch: 34/50 Loss: 3.371670606613159
Epoch: 34/50 Loss: 3.371401391029358
Epoch: 34/50 Loss: 3.3988847618103026
Epoch: 34/50 Loss: 3.3956813430786132
Epoch: 34/50 Loss: 3.395632528305054
Epoch: 34/50 Loss: 3.408763723373413
Epoch: 34/50 Loss: 3.4014831314086913
Epoch: 34/50 Loss: 3.4170239338874815
Epoch: 34/50 Loss: 3.4300766253471373
Epoch: 34/50 Loss: 3.440293403148651
Epoch: 34/50 Loss: 3.4568836894035337
Epoch: 34/50 Loss: 3.4525123081207276
Epoch: 34/50 Loss: 3.4428206429481505
Epoch: 34/50 Loss: 3.4623633823394777
Epoch: 34/50 Loss: 3.459962866783142
Epoch: 35/50 Loss: 3.397423174808235
Epoch: 35/50 Loss: 3.3756671342849733
Epoch: 35/50 Loss: 3.3613001890182495
Epoch: 35/50 Loss: 3.3961171474456786
Epoch: 35/50 Loss: 3.372838927268982
Epoch: 35/50 Loss: 3.403430688381195
Epoch: 35/50 Loss: 3.381500761985779
Epoch: 35/50 Loss: 3.4086303448677064
Epoch: 35/50 Loss: 3.4180406122207643
Epoch: 35/50 Loss: 3.4112331962585447
Epoch: 35/50 Loss: 3.4186969509124756
Epoch: 35/50 Loss: 3.423111517906189
Epoch: 35/50 Loss: 3.4279802737236023
Epoch: 35/50 Loss: 3.4619605579376223
Epoch: 35/50 Loss: 3.4196023092269896
Epoch: 35/50 Loss: 3.435082646369934
Epoch: 35/50 Loss: 3.46928052854538
Epoch: 36/50 Loss: 3.4028661094597634
Epoch: 36/50 Loss: 3.3608404269218446
Epoch: 36/50 Loss: 3.3967194023132325
Epoch: 36/50 Loss: 3.3871848163604734
Epoch: 36/50 Loss: 3.3849933977127074
Epoch: 36/50 Loss: 3.3803206753730772
Epoch: 36/50 Loss: 3.3983873620033265
Epoch: 36/50 Loss: 3.395652235984802
Epoch: 36/50 Loss: 3.3969650616645812
Epoch: 36/50 Loss: 3.397954665184021
Epoch: 36/50 Loss: 3.431672503948212
Epoch: 36/50 Loss: 3.4284891366958616
Epoch: 36/50 Loss: 3.4296383051872255
Epoch: 36/50 Loss: 3.4335634427070616
Epoch: 36/50 Loss: 3.4464554619789123
Epoch: 36/50 Loss: 3.4287529768943785
Epoch: 36/50 Loss: 3.4475033655166625
Epoch: 37/50 Loss: 3.397083348771754
Epoch: 37/50 Loss: 3.3667666602134703
Epoch: 37/50 Loss: 3.3691899824142455
Epoch: 37/50 Loss: 3.3572460641860964
Epoch: 37/50 Loss: 3.3613343472480772
Epoch: 37/50 Loss: 3.3875691280364992
Epoch: 37/50 Loss: 3.3876871938705446
Epoch: 37/50 Loss: 3.391826536655426
Epoch: 37/50 Loss: 3.416728828430176
Epoch: 37/50 Loss: 3.4138199238777163
Epoch: 37/50 Loss: 3.4239575824737547
Epoch: 37/50 Loss: 3.4176981420516968
Epoch: 37/50 Loss: 3.416066034793854
Epoch: 37/50 Loss: 3.4216342487335205
Epoch: 37/50 Loss: 3.4356573185920714
Epoch: 37/50 Loss: 3.442072382926941
Epoch: 37/50 Loss: 3.4289157180786134
Epoch: 38/50 Loss: 3.382192850112915
Epoch: 38/50 Loss: 3.349146860122681
Epoch: 38/50 Loss: 3.3575784516334535
Epoch: 38/50 Loss: 3.3621047854423525
Epoch: 38/50 Loss: 3.371451317310333
Epoch: 38/50 Loss: 3.366515027523041
Epoch: 38/50 Loss: 3.3611457962989806
Epoch: 38/50 Loss: 3.39907656955719
Epoch: 38/50 Loss: 3.407413489818573
Epoch: 38/50 Loss: 3.4081104788780214
Epoch: 38/50 Loss: 3.4209529714584352
Epoch: 38/50 Loss: 3.4033219838142394
Epoch: 38/50 Loss: 3.424390582084656
Epoch: 38/50 Loss: 3.415353960514069
Epoch: 38/50 Loss: 3.4466467475891114
Epoch: 38/50 Loss: 3.455953017234802
Epoch: 38/50 Loss: 3.4613447947502136
Epoch: 39/50 Loss: 3.395313024520874
Epoch: 39/50 Loss: 3.373696825504303
Epoch: 39/50 Loss: 3.347460768699646
Epoch: 39/50 Loss: 3.348865128993988
Epoch: 39/50 Loss: 3.400273462772369
Epoch: 39/50 Loss: 3.3729868106842043
Epoch: 39/50 Loss: 3.3671986756324768
Epoch: 39/50 Loss: 3.3841803684234617
Epoch: 39/50 Loss: 3.3989872164726256
Epoch: 39/50 Loss: 3.396654706478119
Epoch: 39/50 Loss: 3.425925054550171
Epoch: 39/50 Loss: 3.3959758439064025
Epoch: 39/50 Loss: 3.4115366926193236
Epoch: 39/50 Loss: 3.4170353808403013
Epoch: 39/50 Loss: 3.412957437992096
Epoch: 39/50 Loss: 3.438809636116028
Epoch: 39/50 Loss: 3.443488332748413
Epoch: 40/50 Loss: 3.3813527391537677
Epoch: 40/50 Loss: 3.339853758811951
Epoch: 40/50 Loss: 3.3446488552093507
Epoch: 40/50 Loss: 3.362490334033966
Epoch: 40/50 Loss: 3.3678396859169006
Epoch: 40/50 Loss: 3.3644868097305296
Epoch: 40/50 Loss: 3.3745603289604187
Epoch: 40/50 Loss: 3.400062577724457
Epoch: 40/50 Loss: 3.4006435918807982
Epoch: 40/50 Loss: 3.381911808490753
Epoch: 40/50 Loss: 3.40930841588974
Epoch: 40/50 Loss: 3.3866227765083314
Epoch: 40/50 Loss: 3.4094908175468444
Epoch: 40/50 Loss: 3.4229305148124696
Epoch: 40/50 Loss: 3.4151808800697325
Epoch: 40/50 Loss: 3.4340865106582643
Epoch: 40/50 Loss: 3.428878613471985
Epoch: 41/50 Loss: 3.392590676054143
Epoch: 41/50 Loss: 3.3408016810417176
Epoch: 41/50 Loss: 3.337930257320404
Epoch: 41/50 Loss: 3.3600716986656187
Epoch: 41/50 Loss: 3.352771768569946
Epoch: 41/50 Loss: 3.3762117352485657
Epoch: 41/50 Loss: 3.370789174079895
Epoch: 41/50 Loss: 3.390468698501587
Epoch: 41/50 Loss: 3.398447720527649
Epoch: 41/50 Loss: 3.3988220286369324
Epoch: 41/50 Loss: 3.383274136543274
Epoch: 41/50 Loss: 3.4180361166000366
Epoch: 41/50 Loss: 3.4071858429908755
Epoch: 41/50 Loss: 3.419243507862091
Epoch: 41/50 Loss: 3.414725594520569
Epoch: 41/50 Loss: 3.4135225768089295
Epoch: 41/50 Loss: 3.426509735107422
Epoch: 42/50 Loss: 3.383590370714996
Epoch: 42/50 Loss: 3.3190050106048585
Epoch: 42/50 Loss: 3.3431287603378297
Epoch: 42/50 Loss: 3.336298952102661
Epoch: 42/50 Loss: 3.368992443561554
Epoch: 42/50 Loss: 3.3524561071395875
Epoch: 42/50 Loss: 3.350354950428009
Epoch: 42/50 Loss: 3.3876103892326355
Epoch: 42/50 Loss: 3.3888609623908996
Epoch: 42/50 Loss: 3.3969317746162413
Epoch: 42/50 Loss: 3.4134011268615723
Epoch: 42/50 Loss: 3.388237858772278
Epoch: 42/50 Loss: 3.4020225291252135
Epoch: 42/50 Loss: 3.4219931087493896
Epoch: 42/50 Loss: 3.4083484740257264
Epoch: 42/50 Loss: 3.4148167963027953
Epoch: 42/50 Loss: 3.4425931930541993
Epoch: 43/50 Loss: 3.375084323267783
Epoch: 43/50 Loss: 3.3462657155990603
Epoch: 43/50 Loss: 3.3344435782432558
Epoch: 43/50 Loss: 3.3375595083236695
Epoch: 43/50 Loss: 3.3535454030036926
Epoch: 43/50 Loss: 3.32906648349762
Epoch: 43/50 Loss: 3.367526692867279
Epoch: 43/50 Loss: 3.3691976437568663
Epoch: 43/50 Loss: 3.363544155597687
Epoch: 43/50 Loss: 3.3878619804382324
Epoch: 43/50 Loss: 3.4096454582214357
Epoch: 43/50 Loss: 3.389637119293213
Epoch: 43/50 Loss: 3.4163802700042725
Epoch: 43/50 Loss: 3.4258987402915952
Epoch: 43/50 Loss: 3.415721211910248
Epoch: 43/50 Loss: 3.4342961072921754
Epoch: 43/50 Loss: 3.422231529712677
Epoch: 44/50 Loss: 3.369406045079894
Epoch: 44/50 Loss: 3.3351856956481933
Epoch: 44/50 Loss: 3.3403619170188903
Epoch: 44/50 Loss: 3.335961977958679
Epoch: 44/50 Loss: 3.3703363060951235
Epoch: 44/50 Loss: 3.355019334793091
Epoch: 44/50 Loss: 3.3520416798591612
Epoch: 44/50 Loss: 3.3691199617385865
Epoch: 44/50 Loss: 3.3676410040855407
Epoch: 44/50 Loss: 3.3884952330589293
Epoch: 44/50 Loss: 3.3814213576316834
Epoch: 44/50 Loss: 3.38624174451828
Epoch: 44/50 Loss: 3.398065673351288
Epoch: 44/50 Loss: 3.4117964053153993
Epoch: 44/50 Loss: 3.419586715221405
Epoch: 44/50 Loss: 3.4147786269187925
Epoch: 44/50 Loss: 3.408930099487305
Epoch: 45/50 Loss: 3.3592054422227906
Epoch: 45/50 Loss: 3.3144048976898195
Epoch: 45/50 Loss: 3.3317026901245117
Epoch: 45/50 Loss: 3.3291167817115785
Epoch: 45/50 Loss: 3.3502228317260743
Epoch: 45/50 Loss: 3.340053526878357
Epoch: 45/50 Loss: 3.379807912349701
Epoch: 45/50 Loss: 3.3535975847244264
Epoch: 45/50 Loss: 3.3719024777412416
Epoch: 45/50 Loss: 3.387469253540039
Epoch: 45/50 Loss: 3.380273631095886
Epoch: 45/50 Loss: 3.3840805983543394
Epoch: 45/50 Loss: 3.403504664897919
Epoch: 45/50 Loss: 3.4113366041183473
Epoch: 45/50 Loss: 3.4054740080833437
Epoch: 45/50 Loss: 3.4200008754730225
Epoch: 45/50 Loss: 3.411993911743164
Epoch: 46/50 Loss: 3.354236131250129
Epoch: 46/50 Loss: 3.3408305644989014
Epoch: 46/50 Loss: 3.333773087501526
Epoch: 46/50 Loss: 3.3308417053222654
Epoch: 46/50 Loss: 3.334670443534851
Epoch: 46/50 Loss: 3.3266123671531678
Epoch: 46/50 Loss: 3.360192120552063
Epoch: 46/50 Loss: 3.3677366285324095
Epoch: 46/50 Loss: 3.381415358543396
Epoch: 46/50 Loss: 3.3835021505355836
Epoch: 46/50 Loss: 3.384021110057831
Epoch: 46/50 Loss: 3.3823189463615417
Epoch: 46/50 Loss: 3.3915177450180054
Epoch: 46/50 Loss: 3.384876751422882
Epoch: 46/50 Loss: 3.408671064853668
Epoch: 46/50 Loss: 3.4141172165870666
Epoch: 46/50 Loss: 3.4077017397880556
Epoch: 47/50 Loss: 3.344172126326598
Epoch: 47/50 Loss: 3.310088569164276
Epoch: 47/50 Loss: 3.3360726857185363
Epoch: 47/50 Loss: 3.3217876324653623
Epoch: 47/50 Loss: 3.346633759021759
Epoch: 47/50 Loss: 3.373337470531464
Epoch: 47/50 Loss: 3.349922017097473
Epoch: 47/50 Loss: 3.359698698043823
Epoch: 47/50 Loss: 3.3661910076141357
Epoch: 47/50 Loss: 3.3408558382987974
Epoch: 47/50 Loss: 3.378193076133728
Epoch: 47/50 Loss: 3.365145844459534
Epoch: 47/50 Loss: 3.406579786300659
Epoch: 47/50 Loss: 3.4001359767913817
Epoch: 47/50 Loss: 3.4399686260223388
Epoch: 47/50 Loss: 3.420889828681946
Epoch: 47/50 Loss: 3.4165014286041258
Epoch: 48/50 Loss: 3.347930146536652
Epoch: 48/50 Loss: 3.306180275440216
Epoch: 48/50 Loss: 3.324816905498505
Epoch: 48/50 Loss: 3.322040863037109
Epoch: 48/50 Loss: 3.3335712008476257
Epoch: 48/50 Loss: 3.346799533367157
Epoch: 48/50 Loss: 3.3487579379081724
Epoch: 48/50 Loss: 3.3628708639144897
Epoch: 48/50 Loss: 3.384112669467926
Epoch: 48/50 Loss: 3.362531897544861
Epoch: 48/50 Loss: 3.3722641491889953
Epoch: 48/50 Loss: 3.375770344734192
Epoch: 48/50 Loss: 3.379248173713684
Epoch: 48/50 Loss: 3.40091357421875
Epoch: 48/50 Loss: 3.386629187583923
Epoch: 48/50 Loss: 3.399526786327362
Epoch: 48/50 Loss: 3.4052980132102966
Epoch: 49/50 Loss: 3.3471550594049777
Epoch: 49/50 Loss: 3.30485281085968
Epoch: 49/50 Loss: 3.3259602394104
Epoch: 49/50 Loss: 3.324065264225006
Epoch: 49/50 Loss: 3.365146909236908
Epoch: 49/50 Loss: 3.341756565093994
Epoch: 49/50 Loss: 3.3295235414505004
Epoch: 49/50 Loss: 3.332947335243225
Epoch: 49/50 Loss: 3.348553421974182
Epoch: 49/50 Loss: 3.3743089671134947
Epoch: 49/50 Loss: 3.396255979061127
Epoch: 49/50 Loss: 3.377211248397827
Epoch: 49/50 Loss: 3.3653731598854066
Epoch: 49/50 Loss: 3.369532114982605
Epoch: 49/50 Loss: 3.4002244758605955
Epoch: 49/50 Loss: 3.4094139432907102
Epoch: 49/50 Loss: 3.3950993003845213
Epoch: 50/50 Loss: 3.3460027946115733
Epoch: 50/50 Loss: 3.327750300884247
Epoch: 50/50 Loss: 3.3263371419906616
Epoch: 50/50 Loss: 3.3164320998191834
Epoch: 50/50 Loss: 3.314189978122711
Epoch: 50/50 Loss: 3.3356665539741517
Epoch: 50/50 Loss: 3.327284802913666
Epoch: 50/50 Loss: 3.369710102558136
Epoch: 50/50 Loss: 3.372405698776245
Epoch: 50/50 Loss: 3.3740906744003296
Epoch: 50/50 Loss: 3.3629390926361085
Epoch: 50/50 Loss: 3.3796582608222963
Epoch: 50/50 Loss: 3.3729727053642273
Epoch: 50/50 Loss: 3.374483947753906
Epoch: 50/50 Loss: 3.38959716463089
Epoch: 50/50 Loss: 3.4134062294960024
Epoch: 50/50 Loss: 3.4088366594314574
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:**

The first attempt used a sequence length of 100, with batches of 50 and an LSTM dimension of 128 (2 layers). This made training fairly slow, and the loss fluctuated quite a bit and struggled to drop below roughly 3.7. I thought increasing the sequence length to 200 and the size of the LSTM unit to 256 would help, but it only made training slower and did little for the fluctuating loss.

Taking a step back and looking at the nature of this dataset, there was a clear possible reason for this. The data is a TV script that is mostly dialogue, so each contiguous piece of text is a sentence or two at most, and fairly short. Thus, a long sequence length doesn't necessarily help and could actually hurt the learning process. It also makes training slower, because there is no parallelism within the forward pass and backpropagation of a single sequence; the parallelisation happens across batches, where each batch can be processed independently on the GPU. The larger LSTM dimension of 256 didn't help either.

Reverting to a 128 LSTM dimension and reducing the sequence length drastically from 200 to only 25 words, while increasing the batch size to 100, yielded a significant improvement: training ran much faster, most likely due to increased parallelism and shorter sequences, and the loss fluctuated less as it trained through the epochs.

The number of epochs was initially 10, but that only brought the loss down to about 3.55; it was clear the loss could drop further with more epochs, so this was increased to 30 and then to 50. With 50 epochs, the loss reached about 3.35 on the final epoch. An embedding dimension of 300 seemed to work well, so I didn't change that parameter much. The learning rate was set to 0.001, which worked well with some fluctuation in the loss but no massive divergences.

So the final model configuration was:

- 50 epochs
- 128 LSTM dimension
- 2 hidden layers
- 300 embedding dimension
- 0.001 learning rate
- 25 sequence length
- 100 batch size

resulting in a loss of around 3.35 for the final training epoch.

--- Checkpoint

After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
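As a side note, the top-k sampling step on its own can be illustrated with a tiny, self-contained sketch; the scores and vocabulary size below are made up for illustration, while the `generate` function in the next cell applies the same steps to the RNN's real output.

```
import numpy as np
import torch
import torch.nn.functional as F

# illustrative scores over a tiny 8-word vocabulary (one row = one sequence)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2, 0.7, 1.9, 0.05]])

p = F.softmax(scores, dim=1).data   # turn raw scores into probabilities
top_k = 5
p, top_i = p.topk(top_k)            # keep only the k most likely word ids
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

# sample the next word id from the top-k candidates, weighted by probability
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```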
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation tokens keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output.cpu(), dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
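Before generating, it can help to confirm that your chosen prime word actually exists in the vocabulary; this is a small sketch only, and the candidate name below is just an example (the cell that follows looks up `prime_word + ':'` for character names).

```
# hypothetical sanity check using the vocab_to_int dictionary loaded above
candidate = 'newman'
if candidate + ':' in vocab_to_int:
    print(candidate + ': is in the vocabulary and can be used as a prime word')
else:
    print(candidate + ': not found; pick another name or word from vocab_to_int')
```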
###Code
import numpy as np
import torch
# run the cell multiple times to get different results!
gen_length = 500 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:44: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
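A small variation on the cell below is to use a `with` block, which closes the file automatically even if an error occurs; this sketch assumes `generated_script` from the previous cell.

```
# equivalent save using a context manager; the file is closed automatically
with open('generated_script_1.txt', 'w') as f:
    f.write(generated_script)
```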
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Keeping your connection alive during long processesIn a local environment, do not run the following script
###Code
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
def keep_awake(iterable, delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import keep_awake
for i in keep_awake(range(5)):
# do iteration with lots of work here
"""
with active_session(delay, interval): yield from iterable
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
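As a quick, hedged illustration of what these lookup tables should do, the toy example below builds both dictionaries for a made-up word list and checks that the mapping round-trips; the real tables are built from the script text in the next cell.

```
# toy example (names prefixed with toy_ to avoid clashing with the real tables)
toy_words = ['jerry', 'hello', 'newman', 'hello']

toy_vocab_to_int = {word: ii for ii, word in enumerate(set(toy_words))}
toy_int_to_vocab = {ii: word for word, ii in toy_vocab_to_int.items()}

encoded = [toy_vocab_to_int[w] for w in toy_words]
decoded = [toy_int_to_vocab[i] for i in encoded]

print(encoded)                # e.g. [2, 0, 1, 0]; ids depend on set ordering
print(decoded == toy_words)   # True: the mapping round-trips
```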
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab_to_int = {}
int_to_vocab = {}
words = set(text)
for ii, word in enumerate(words):
vocab_to_int[word]=ii
int_to_vocab[ii]=word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
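To make the effect concrete, here is a small sketch of how such a token dictionary is typically applied before splitting on spaces; it mirrors the idea used by the provided pre-processing helper, though the helper's exact implementation may differ.

```
# toy tokenization: replace each symbol with ' <token> ' so it splits as its own word
toy_token_dict = {'!': '||Exclamation_mark||', ',': '||Comma||'}
line = 'hello, newman!'

for symbol, token in toy_token_dict.items():
    line = line.replace(symbol, ' {} '.format(token))

print(line.split())
# ['hello', '||Comma||', 'newman', '||Exclamation_mark||']
```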
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
Tokenized_dictionary = {".":"||Period||"
,",":"||Comma||"
,"\"":"||Quotation_Mark||"
,";":"||Semicolon"
,"!":"||Exclamation_mark||"
,"?":"||Question_mark||"
,"(":"||Left_Parentheses||"
,")":"||Right_Parentheses||"
,"-":"||Dash||"
,"\n":"||Return||"}
return Tokenized_dictionary
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
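The worked example above can be reproduced with a few lines of plain Python before wiring in `TensorDataset` and `DataLoader`; this is a sketch only, and the real `batch_data` implementation follows in the next cell.

```
# slide a window of length sequence_length over the toy word ids
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features, targets = [], []
for start in range(len(words) - sequence_length):
    end = start + sequence_length
    features.append(words[start:end])   # e.g. [1, 2, 3, 4]
    targets.append(words[end])          # the next word, e.g. 5

print(features)   # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)    # [5, 6, 7]
```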
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
features = []
targets = []
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
for idx_start in range(0,len(words)-sequence_length):
idx_end = idx_start + sequence_length
feature_tensor = words[idx_start:idx_end]
features.append(feature_tensor)
target_tensor = words[idx_end]
targets.append(target_tensor)
data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(targets)))
dataloader = DataLoader(data, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
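The reshaping described in the hints can be checked in isolation with random tensors; the dimensions below (batch of 10, sequence of 5, hidden size 256, vocabulary of 20) are made up purely to show the shapes involved.

```
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, output_size = 10, 5, 256, 20

# pretend this came out of the LSTM: (batch_size, seq_length, hidden_dim)
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)
fc = nn.Linear(hidden_dim, output_size)

# stack the outputs so the linear layer sees one row per time step
flat = lstm_output.contiguous().view(-1, hidden_dim)   # (50, 256)
scores = fc(flat)                                       # (50, 20)

# reshape back and keep only the scores for the last time step of each sequence
scores = scores.view(batch_size, -1, output_size)       # (10, 5, 20)
last = scores[:, -1]                                     # (10, 20)
print(flat.shape, scores.shape, last.shape)
```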
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(num_embeddings = vocab_size,
embedding_dim = embedding_dim)
self.lstm = nn.LSTM(input_size =embedding_dim,
hidden_size = hidden_dim,
num_layers = n_layers,
batch_first = True,
dropout = dropout
)
# linear layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
output = lstm_output.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
# initialize hidden state with zero weights, and move to GPU if available
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move data to GPU, if available
if(train_on_gpu):
rnn = rnn.cuda()
inputs, target = inp.cuda(), target.cuda()
else:
# make sure `inputs` is also defined when no GPU is available
inputs = inp
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
optimizer.zero_grad()
# get the output from the model
output, h = rnn(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 16
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
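Before training, it can help to know how many full batches one epoch will contain with these settings, and therefore how often the training loop will print progress. A small optional check using the loader and parameters defined above:

```python
# how many full batches train_rnn will see per epoch, and how often it will print a loss line
n_batches = len(train_loader.dataset) // batch_size
print(n_batches, 'full batches per epoch')
print(n_batches // show_every_n_batches, 'progress prints per epoch')
```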
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
with active_session():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.522952121734619
Epoch: 1/10 Loss: 4.754759871959687
Epoch: 1/10 Loss: 4.656247469425201
Epoch: 1/10 Loss: 4.48144774723053
Epoch: 1/10 Loss: 4.354819772243499
Epoch: 1/10 Loss: 4.420807358264923
Epoch: 2/10 Loss: 4.243906981583651
Epoch: 2/10 Loss: 3.9680150237083436
Epoch: 2/10 Loss: 4.038862237930298
Epoch: 2/10 Loss: 3.9595134258270264
Epoch: 2/10 Loss: 3.9117071013450624
Epoch: 2/10 Loss: 4.012768744945526
Epoch: 3/10 Loss: 3.9257868532004214
Epoch: 3/10 Loss: 3.7406112899780273
Epoch: 3/10 Loss: 3.8140684480667115
Epoch: 3/10 Loss: 3.7535563635826112
Epoch: 3/10 Loss: 3.699676116943359
Epoch: 3/10 Loss: 3.807193706035614
Epoch: 4/10 Loss: 3.7294869781874445
Epoch: 4/10 Loss: 3.581225072860718
Epoch: 4/10 Loss: 3.6560447015762327
Epoch: 4/10 Loss: 3.6053649821281435
Epoch: 4/10 Loss: 3.547518718242645
Epoch: 4/10 Loss: 3.6681784377098086
Epoch: 5/10 Loss: 3.5952676674114237
Epoch: 5/10 Loss: 3.46596888589859
Epoch: 5/10 Loss: 3.5328338894844054
Epoch: 5/10 Loss: 3.5102869787216187
Epoch: 5/10 Loss: 3.4422434101104735
Epoch: 5/10 Loss: 3.5584872608184814
Epoch: 6/10 Loss: 3.4939115142046133
Epoch: 6/10 Loss: 3.376741497993469
Epoch: 6/10 Loss: 3.442264214038849
Epoch: 6/10 Loss: 3.423407745838165
Epoch: 6/10 Loss: 3.351679904937744
Epoch: 6/10 Loss: 3.4829579901695253
Epoch: 7/10 Loss: 3.4114400042886044
Epoch: 7/10 Loss: 3.302019403934479
Epoch: 7/10 Loss: 3.357560217857361
Epoch: 7/10 Loss: 3.349112678527832
Epoch: 7/10 Loss: 3.282305521965027
Epoch: 7/10 Loss: 3.4118709349632264
Epoch: 8/10 Loss: 3.350179165571413
Epoch: 8/10 Loss: 3.2463563504219057
Epoch: 8/10 Loss: 3.2926317286491393
Epoch: 8/10 Loss: 3.2892961072921754
Epoch: 8/10 Loss: 3.222636125087738
Epoch: 8/10 Loss: 3.3476591720581053
Epoch: 9/10 Loss: 3.293566507337537
Epoch: 9/10 Loss: 3.1977349667549135
Epoch: 9/10 Loss: 3.2368158712387083
Epoch: 9/10 Loss: 3.241262062072754
Epoch: 9/10 Loss: 3.167059338092804
Epoch: 9/10 Loss: 3.2985173225402833
Epoch: 10/10 Loss: 3.2449350970686393
Epoch: 10/10 Loss: 3.1553981909751894
Epoch: 10/10 Loss: 3.1925506958961485
Epoch: 10/10 Loss: 3.1991492381095887
Epoch: 10/10 Loss: 3.1269894323349
Epoch: 10/10 Loss: 3.2487295141220094
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I started with the examples provided and based the hyperparameters on [sentiment-rnn](https://github.com/udacity/deep-learning-v2-pytorch/blob/master/sentiment-rnn/Sentiment_RNN_Solution.ipynb). In the end I used the following hyperparameters:- **length of a sequence = 16** I set it to 16 as a reasonable number of words in a sentence.- **batch size = 256** A medium size that keeps each training step affordable to compute.- **number of epochs to train for = 10** In the final run, from epoch 7 onward the reported loss never exceeded 3.5.- **learning rate for an Adam optimizer = 0.001** I started with a larger value such as 0.01 (which left the loss above 3.5) and lowered it to 0.001.- **vocab size = len(vocab_to_int)** The number of unique tokens in our vocabulary.- **output size = vocab size** The model outputs one score per vocabulary word, so the output has the same size as the vocabulary.- **embedding dimension = 400** Smaller than the vocab_size; a larger embedding takes longer to train (computational cost), while a very small one may not capture enough semantics. I kept the value used in "sentiment-rnn".- **hidden dimension of the RNN = 256** I kept the value used in "sentiment-rnn".- **number of layers/cells of the RNN = 2** Using 3 layers would take longer to train; 2 was enough to reach the expected result (a loss below 3.5). --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
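As a rough, optional check after reloading, the parameter count implied by the vocabulary, embedding and hidden sizes chosen above can be computed directly from the loaded model:

```python
# total number of parameters for the chosen vocab/embedding/hidden sizes
n_params = sum(p.numel() for p in trained_rnn.parameters())
print('{:,} parameters'.format(n_params))
```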
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
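Before looking at the full `generate` function below, here is the top-k sampling idea on its own, using a made-up score vector so the behaviour is easy to see:

```python
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.2, 3.1, 0.1, 2.7]])   # fake word scores for one sequence
p = F.softmax(scores, dim=1).data                          # turn scores into probabilities
p, top_i = p.topk(5)                                       # keep only the 5 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())            # sample among them for some variety
print(word_i)                                              # usually 3 or 5, occasionally others
```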
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:44: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
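A quick optional check that the file written above actually contains the generated script:

```python
# re-open the file saved in the cell above and peek at the beginning
with open('generated_script_1.txt') as f:
    print(f.read()[:300])
```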
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# "text" is already a list of words here, so build the id mappings over its unique entries
words = tuple(set(text))
int_to_vocab = dict(enumerate(words))
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
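A quick, optional round-trip check of the lookup tables on a toy word list (the words here are made up):

```python
toy_words = ['jerry', 'hello', 'jerry', 'george', 'hello']
v2i, i2v = create_lookup_tables(toy_words)
assert all(i2v[v2i[w]] == w for w in toy_words)   # every id maps back to its word
print(len(v2i), 'unique words ->', v2i)
```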
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
return {
'.': '||period||',
';': '||semicolon||',
'"': '||quote||',
'!': '||exclamation||',
'?': '||question||',
'(': '||left_par||',
')': '||right_par||',
',': '||comma||',
'-': '||hyphen||',
'\n': '||new_line||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
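A small sketch of how this dictionary is meant to be applied during pre-processing (the real work happens inside `helper.preprocess_and_save_data`; this only shows the idea of replacing each symbol and padding it with spaces):

```python
sample = 'hello, world!'
for key, token in token_lookup().items():
    sample = sample.replace(key, ' {} '.format(token))
print(sample.split())   # ['hello', '||comma||', 'world', '||exclamation||']
```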
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
batch_size_total = batch_size*sequence_length
n_batches = len(words)//batch_size_total
words = words[:n_batches*batch_size_total]
x, y = [], []
for n in range(0,len(words)- sequence_length):
x.append(words[n:n+sequence_length])
y.append(words[n+sequence_length])
x_ten = torch.from_numpy(np.array(x))
y_ten = torch.from_numpy(np.array(y))
data = TensorDataset(x_ten, y_ten)
return torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 21, 22, 23, 24, 25],
[ 20, 21, 22, 23, 24],
[ 24, 25, 26, 27, 28],
[ 33, 34, 35, 36, 37],
[ 32, 33, 34, 35, 36],
[ 3, 4, 5, 6, 7],
[ 5, 6, 7, 8, 9],
[ 22, 23, 24, 25, 26],
[ 34, 35, 36, 37, 38],
[ 0, 1, 2, 3, 4]])
torch.Size([10])
tensor([ 26, 25, 29, 38, 37, 8, 10, 27, 39, 5])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = nn.Dropout(0.25)
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embed = self.embedding(nn_input)
r_output, hidden = self.lstm(embed,hidden)
r_output = r_output.contiguous().view(-1,self.hidden_dim)
output = self.dropout(r_output)
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers,
batch_size,
self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers,
batch_size,
self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers,
batch_size,
self.hidden_dim).zero_(),
weight.new(self.n_layers,
batch_size,
self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
if (train_on_gpu):
inp, target = inp.cuda(), target.cuda()
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
with active_session():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.472956974983215
Epoch: 1/20 Loss: 4.778247024536133
Epoch: 1/20 Loss: 4.5611457328796385
Epoch: 1/20 Loss: 4.442307324409485
Epoch: 1/20 Loss: 4.37711930513382
Epoch: 1/20 Loss: 4.3269067187309265
Epoch: 2/20 Loss: 4.228359551181345
Epoch: 2/20 Loss: 4.133873756408692
Epoch: 2/20 Loss: 4.120244804382324
Epoch: 2/20 Loss: 4.105042007446289
Epoch: 2/20 Loss: 4.093498089313507
Epoch: 2/20 Loss: 4.064740857124328
Epoch: 3/20 Loss: 4.007802387058309
Epoch: 3/20 Loss: 3.948322859764099
Epoch: 3/20 Loss: 3.9448551321029663
Epoch: 3/20 Loss: 3.9328135671615603
Epoch: 3/20 Loss: 3.922827781200409
Epoch: 3/20 Loss: 3.938111147880554
Epoch: 4/20 Loss: 3.863433227110445
Epoch: 4/20 Loss: 3.8200348558425903
Epoch: 4/20 Loss: 3.8278083066940307
Epoch: 4/20 Loss: 3.8361974730491637
Epoch: 4/20 Loss: 3.819807970523834
Epoch: 4/20 Loss: 3.8445742893218995
Epoch: 5/20 Loss: 3.786066251585262
Epoch: 5/20 Loss: 3.7320729660987855
Epoch: 5/20 Loss: 3.742035280227661
Epoch: 5/20 Loss: 3.7464504222869874
Epoch: 5/20 Loss: 3.755089564323425
Epoch: 5/20 Loss: 3.7483711080551148
Epoch: 6/20 Loss: 3.715162828336332
Epoch: 6/20 Loss: 3.6667286672592163
Epoch: 6/20 Loss: 3.6790166807174685
Epoch: 6/20 Loss: 3.6914181571006774
Epoch: 6/20 Loss: 3.693649087429047
Epoch: 6/20 Loss: 3.697379428386688
Epoch: 7/20 Loss: 3.6426511049514163
Epoch: 7/20 Loss: 3.6315478339195253
Epoch: 7/20 Loss: 3.629756965637207
Epoch: 7/20 Loss: 3.6340033864974974
Epoch: 7/20 Loss: 3.6262159543037416
Epoch: 7/20 Loss: 3.6440613465309144
Epoch: 8/20 Loss: 3.607206752034812
Epoch: 8/20 Loss: 3.5713952860832214
Epoch: 8/20 Loss: 3.5932981367111205
Epoch: 8/20 Loss: 3.587841139793396
Epoch: 8/20 Loss: 3.5928955068588255
Epoch: 8/20 Loss: 3.620823034286499
Epoch: 9/20 Loss: 3.569108201729265
Epoch: 9/20 Loss: 3.525101620197296
Epoch: 9/20 Loss: 3.5365047702789307
Epoch: 9/20 Loss: 3.5620567569732664
Epoch: 9/20 Loss: 3.567617578983307
Epoch: 9/20 Loss: 3.5797277994155885
Epoch: 10/20 Loss: 3.5172681277570246
Epoch: 10/20 Loss: 3.496902190208435
Epoch: 10/20 Loss: 3.516548152446747
Epoch: 10/20 Loss: 3.543080725669861
Epoch: 10/20 Loss: 3.5332755365371704
Epoch: 10/20 Loss: 3.5453475847244262
Epoch: 11/20 Loss: 3.5105559285254473
Epoch: 11/20 Loss: 3.4782037596702575
Epoch: 11/20 Loss: 3.463699842453003
Epoch: 11/20 Loss: 3.50378421831131
Epoch: 11/20 Loss: 3.5058783679008485
Epoch: 11/20 Loss: 3.5173110904693603
Epoch: 12/20 Loss: 3.4706592389341515
Epoch: 12/20 Loss: 3.43311047410965
Epoch: 12/20 Loss: 3.464690296173096
Epoch: 12/20 Loss: 3.4796830520629882
Epoch: 12/20 Loss: 3.4693809962272644
Epoch: 12/20 Loss: 3.491677869796753
Epoch: 13/20 Loss: 3.4579450852780833
Epoch: 13/20 Loss: 3.4266866216659544
Epoch: 13/20 Loss: 3.424931010723114
Epoch: 13/20 Loss: 3.4592221961021425
Epoch: 13/20 Loss: 3.4687773509025575
Epoch: 13/20 Loss: 3.463481078147888
Epoch: 14/20 Loss: 3.433932823350651
Epoch: 14/20 Loss: 3.396617233276367
Epoch: 14/20 Loss: 3.41681413602829
Epoch: 14/20 Loss: 3.434564519405365
Epoch: 14/20 Loss: 3.4537267956733704
Epoch: 14/20 Loss: 3.4544762392044066
Epoch: 15/20 Loss: 3.404739597357574
Epoch: 15/20 Loss: 3.3869185853004455
Epoch: 15/20 Loss: 3.397803156852722
Epoch: 15/20 Loss: 3.396700533390045
Epoch: 15/20 Loss: 3.4292261743545533
Epoch: 15/20 Loss: 3.444297547340393
Epoch: 16/20 Loss: 3.391730428837902
Epoch: 16/20 Loss: 3.360869251728058
Epoch: 16/20 Loss: 3.388628399848938
Epoch: 16/20 Loss: 3.3864743776321413
Epoch: 16/20 Loss: 3.4065000381469726
Epoch: 16/20 Loss: 3.417522574901581
Epoch: 17/20 Loss: 3.3870991328398223
Epoch: 17/20 Loss: 3.354984178543091
Epoch: 17/20 Loss: 3.365598596572876
Epoch: 17/20 Loss: 3.375752248287201
Epoch: 17/20 Loss: 3.392991916179657
Epoch: 17/20 Loss: 3.4017629752159118
Epoch: 18/20 Loss: 3.361126951104652
Epoch: 18/20 Loss: 3.3484280824661257
Epoch: 18/20 Loss: 3.3578254833221437
Epoch: 18/20 Loss: 3.3553472452163695
Epoch: 18/20 Loss: 3.3740733013153075
Epoch: 18/20 Loss: 3.38919611120224
Epoch: 19/20 Loss: 3.3480355240351334
Epoch: 19/20 Loss: 3.3238951878547667
Epoch: 19/20 Loss: 3.3525532855987548
Epoch: 19/20 Loss: 3.3468800201416014
Epoch: 19/20 Loss: 3.3594187479019166
Epoch: 19/20 Loss: 3.371137848854065
Epoch: 20/20 Loss: 3.3330890399321107
Epoch: 20/20 Loss: 3.3095488953590393
Epoch: 20/20 Loss: 3.3195668268203735
Epoch: 20/20 Loss: 3.3446415367126465
Epoch: 20/20 Loss: 3.361153946876526
Epoch: 20/20 Loss: 3.369842571258545
Model Trained and Saved
###Markdown
--- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
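One detail worth seeing in isolation before reading the full function: the prediction window is advanced with `np.roll`, which shifts the sequence one slot to the left so the newly generated word id can be written into the last position. A toy version:

```python
import numpy as np

seq = np.array([[0, 0, 11, 12, 13]])   # padded sequence, most recent word id last
seq = np.roll(seq, -1, 1)              # shift every id one position to the left
seq[-1][-1] = 14                       # drop in the freshly generated word id
print(seq)                             # [[ 0 11 12 13 14]]
```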
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:38: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# build the unique-word set once so both dictionaries are guaranteed to agree on ids
words = set(text)
vocab2int = {word: i for i, word in enumerate(words)}
int2vocab = {i: word for i, word in enumerate(words)}
# return tuple
return vocab2int, int2vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.':'||Period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Parentheses||',
')':'||Right_Parentheses||',
'-':'||Dash||',
'\n':'||Return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def get_target(words, idx, window_size=5):
# skip-gram style helper: returns the words in a context window of random radius
# (1..window_size) around position idx; note it is not called by batch_data below
R = np.random.randint(1, window_size + 1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = words[start:idx] + words[idx + 1:stop + 1]
return list(target_words)
def batch_data(words, sequence_length, batch_size=5):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# get number of targets we can make
n_targets = len(words) - sequence_length
# initialize feature and target
feature, target = [], []
# loop through all targets we can make
for i in range(n_targets):
x = words[i : i+sequence_length] # get some words from the given list
y = words[i+sequence_length] # get the next word to be the target
feature.append(x)
target.append(y)
feature_tensor, target_tensor = torch.from_numpy(np.array(feature)), torch.from_numpy(np.array(target))
# create data
data = TensorDataset(feature_tensor, target_tensor)
# create dataloader
dataloader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[22, 23, 24, 25, 26],
[18, 19, 20, 21, 22],
[ 6, 7, 8, 9, 10],
[35, 36, 37, 38, 39],
[ 0, 1, 2, 3, 4],
[20, 21, 22, 23, 24],
[17, 18, 19, 20, 21],
[ 8, 9, 10, 11, 12],
[14, 15, 16, 17, 18],
[42, 43, 44, 45, 46]])
torch.Size([10])
tensor([27, 23, 11, 40, 5, 25, 22, 13, 19, 47])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim,
hidden_dim,
n_layers,
dropout = dropout,
batch_first = True
)
self.dropout = nn.Dropout()
self.fc = nn.Linear(hidden_dim, output_size)
#self.sig = nn.Sigmoid()  # not needed: nn.CrossEntropyLoss applies log-softmax to the raw scores itself
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#out = self.dropout(lstm_out)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
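Regarding the commented-out sigmoid in the model above: no output activation is needed because `nn.CrossEntropyLoss` (used later as the criterion) applies log-softmax to the raw scores internally. A toy check with made-up scores:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

scores = torch.tensor([[1.5, 0.2, -0.3]])    # raw fc output for one sequence
target = torch.tensor([0])                   # index of the true next word
print(nn.CrossEntropyLoss()(scores, target))
print(F.nll_loss(F.log_softmax(scores, dim=1), target))   # identical value: log-softmax is built in
```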
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
    # move the model and data to GPU, if available
    if(train_on_gpu):
        rnn.cuda()
        inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
#### Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function below. This function will train the network over all the batches for the number of epochs given. Model progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other hyperparameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.246990999221802
Epoch: 1/10 Loss: 4.657585704088211
Epoch: 1/10 Loss: 4.45005393075943
Epoch: 1/10 Loss: 4.333704968452453
Epoch: 1/10 Loss: 4.263662850379943
Epoch: 1/10 Loss: 4.229911610841751
Epoch: 2/10 Loss: 4.082900234985739
Epoch: 2/10 Loss: 3.9861891577243806
Epoch: 2/10 Loss: 3.9701928913593294
Epoch: 2/10 Loss: 3.9595058488845827
Epoch: 2/10 Loss: 3.9494157614707945
Epoch: 2/10 Loss: 3.9485573186874388
Epoch: 3/10 Loss: 3.8470129720931006
Epoch: 3/10 Loss: 3.7845678362846376
Epoch: 3/10 Loss: 3.7661884078979493
Epoch: 3/10 Loss: 3.7892973890304567
Epoch: 3/10 Loss: 3.769862293958664
Epoch: 3/10 Loss: 3.780990803003311
Epoch: 4/10 Loss: 3.7146878542786266
Epoch: 4/10 Loss: 3.6415553512573244
Epoch: 4/10 Loss: 3.6499096276760103
Epoch: 4/10 Loss: 3.6678592205047607
Epoch: 4/10 Loss: 3.654672094106674
Epoch: 4/10 Loss: 3.6796951491832735
Epoch: 5/10 Loss: 3.611517976176745
Epoch: 5/10 Loss: 3.5514617977142335
Epoch: 5/10 Loss: 3.5535734639167784
Epoch: 5/10 Loss: 3.571285780906677
Epoch: 5/10 Loss: 3.578174908876419
Epoch: 5/10 Loss: 3.6068078253269196
Epoch: 6/10 Loss: 3.533201336315769
Epoch: 6/10 Loss: 3.4701901757717133
Epoch: 6/10 Loss: 3.4914867269992826
Epoch: 6/10 Loss: 3.494761492729187
Epoch: 6/10 Loss: 3.5154988026618956
Epoch: 6/10 Loss: 3.5346083900928496
Epoch: 7/10 Loss: 3.4750923048487286
Epoch: 7/10 Loss: 3.420525914669037
Epoch: 7/10 Loss: 3.4341518335342407
Epoch: 7/10 Loss: 3.4557393221855164
Epoch: 7/10 Loss: 3.4647683248519896
Epoch: 7/10 Loss: 3.479178802013397
Epoch: 8/10 Loss: 3.410317848514578
Epoch: 8/10 Loss: 3.3603113305568697
Epoch: 8/10 Loss: 3.386174315929413
Epoch: 8/10 Loss: 3.404626386880875
Epoch: 8/10 Loss: 3.4300795192718505
Epoch: 8/10 Loss: 3.4444572412967682
Epoch: 9/10 Loss: 3.3722622063996646
Epoch: 9/10 Loss: 3.3373029968738557
Epoch: 9/10 Loss: 3.3332952313423156
Epoch: 9/10 Loss: 3.3568105165958406
Epoch: 9/10 Loss: 3.3780274176597596
Epoch: 9/10 Loss: 3.398186367034912
Epoch: 10/10 Loss: 3.347148909915093
Epoch: 10/10 Loss: 3.291948429822922
Epoch: 10/10 Loss: 3.310691568374634
Epoch: 10/10 Loss: 3.3340189960002897
Epoch: 10/10 Loss: 3.354156327962875
Epoch: 10/10 Loss: 3.3540609579086302
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried larger sequence lengths such as 15 and 20, but training converged more slowly, so after several runs a sequence length of 10 turned out to be a good value. hidden_dim is set to 256, in line with the examples shown in the course videos, and worked well here. With these settings the model reached a loss of about 3.29 during epoch 10, as the training log above shows. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
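Before the full `generate` function, here is a small, self-contained sketch of the top-k sampling idea it relies on; the scores and vocabulary size are fabricated for illustration. Only the k highest word scores are kept, their probabilities are renormalized, and the next word index is drawn at random from those k:
```
import numpy as np
import torch
import torch.nn.functional as F

output = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2, 0.9]])  # fake scores for a 6-word vocabulary
p = F.softmax(output, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)                         # k highest probabilities and their word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

word_i = np.random.choice(top_i, p=p / p.sum())  # sample with renormalized probabilities
print(top_i, word_i)
```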
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: suitcases suitcases nyu meets the cabin.
jerry: oh, you know what this is?
jerry: i was in the apartment last week, but i was just curious, i can't get out of my apartment, and i was gonna go out with you, i don't know why i was just trying to tell her i was a little disappointed, and i was gonna get a little nervous, i don't know.
jerry: well knocks up there?
kramer: oh, yeah, yeah.(george enters.) oh, hi, hi.
george:(looking at jerry) i know, you don't have to get it out of the car.
frank:(still holding his hand to the kitchen)
kramer:(to jerry) you want you to go?
george: i don't know. i was just thinking... you know, i don't know.
elaine: i think it should be a very nice place to be a little bit to do this.
jerry:(confused) what are you gonna do? i got a little bit to see you and jerry.
jerry: what?
jerry: i was just curious.
elaine: well, i guess, i was in love with the specials.
jerry: i know, i don't think so.
kramer:(to himself) i can't believe this.(jerry and george enter)
george: i think you know.
elaine: well, i don't think so.
jerry: you know, you don't know. i mean......
kramer: well, you know, it's like a long time.
elaine: oh, you don't know why you think you're not gonna have any time to get in the bathroom? you know what?
elaine:(looking at his watch) i can't believe what i think.
kramer: oh, well, you got a great time.
jerry:(to jerry) you see, i don't think so
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
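The idea is just two mirrored dictionaries; a minimal sketch on a made-up word list (ordering the vocabulary by frequency is a common convention, not something the tests require):
```
from collections import Counter

words = "the cat sat on the mat the end".split()
counts = Counter(words)                               # word frequencies
vocab = sorted(counts, key=counts.get, reverse=True)  # most frequent word gets id 0

vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
print(vocab_to_int["the"], int_to_vocab[0])           # 0 the
```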
###Code
import problem_unittests as tests
from collections import Counter
from string import punctuation
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
    counts = Counter(text)  # word frequencies over the full text (counting a set would make every count 1)
    vocabulary = sorted(counts, key=counts.get, reverse=True)  # most frequent words first
    vocab_to_int = {word: ii for ii, word in enumerate(vocabulary)}  # word -> int token
    int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}   # int token -> word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
from string import punctuation
print(punctuation)
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
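A minimal sketch of the windowing described above, on the toy `words = [1, 2, 3, 4, 5, 6, 7]` example (the small batch size is chosen just for illustration):
```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
# features -> [[1,2,3,4], [2,3,4,5], [3,4,5,6]], targets -> [5, 6, 7]

data = TensorDataset(torch.from_numpy(np.array(features)), torch.from_numpy(np.array(targets)))
loader = DataLoader(data, batch_size=2)
for x, y in loader:
    print(x, y)
```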
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# words = words.reshape((batch_size, -1))
target_length = len(words) - sequence_length
x, y = [], []
for n in range(target_length):
x.append(words[n:n+sequence_length]) #make sequence_len window
y.append(words[n+sequence_length])
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5, lr=0.001):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
# # Creating new variables for the hidden state, otherwise
# # we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
    if(train_on_gpu):
        inputs, target = inp.cuda(), target.cuda()
    else:
        inputs = inp
    # get predicted outputs
    output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function below. This function will train the network over all the batches for the number of epochs given. Model progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other hyperparameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
len(int_text)
# Data params
sequence_length = 10 # of words in a sequence
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 200
hidden_dim = 250
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./trained_rnn.pt', trained_rnn)
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.626285982608795
Epoch: 1/10 Loss: 5.042195623397827
Epoch: 1/10 Loss: 4.927733078002929
Epoch: 1/10 Loss: 4.748253393173218
Epoch: 1/10 Loss: 4.666325830459595
Epoch: 1/10 Loss: 4.509279230594635
Epoch: 1/10 Loss: 4.4452972702980045
Epoch: 1/10 Loss: 4.491038519382477
Epoch: 1/10 Loss: 4.563551615715027
Epoch: 1/10 Loss: 4.404510603427887
Epoch: 1/10 Loss: 4.495689366340637
Epoch: 1/10 Loss: 4.536718783378601
Epoch: 1/10 Loss: 4.457179727554322
Epoch: 1/10 Loss: 4.391322546482086
Epoch: 1/10 Loss: 4.402498832702637
Epoch: 1/10 Loss: 4.210037431240082
Epoch: 1/10 Loss: 4.251832055568695
Epoch: 1/10 Loss: 4.321867434978485
Epoch: 1/10 Loss: 4.271789047718048
Epoch: 1/10 Loss: 4.208213728904724
Epoch: 1/10 Loss: 4.330883482933045
Epoch: 1/10 Loss: 4.412975255966186
Epoch: 1/10 Loss: 4.4305776352882384
Epoch: 1/10 Loss: 4.380789225578308
Epoch: 1/10 Loss: 4.386799168109894
Epoch: 1/10 Loss: 4.421807224273682
Epoch: 1/10 Loss: 4.33577938747406
Epoch: 2/10 Loss: 4.222767859856034
Epoch: 2/10 Loss: 4.038570663452148
Epoch: 2/10 Loss: 4.059839665412903
Epoch: 2/10 Loss: 4.016582417964935
Epoch: 2/10 Loss: 3.9989163880348206
Epoch: 2/10 Loss: 3.9109033637046813
Epoch: 2/10 Loss: 3.8846182470321655
Epoch: 2/10 Loss: 3.9408162150382995
Epoch: 2/10 Loss: 4.054593029022217
Epoch: 2/10 Loss: 3.9398419728279115
Epoch: 2/10 Loss: 4.035030739784241
Epoch: 2/10 Loss: 4.10122740983963
Epoch: 2/10 Loss: 4.029494116783142
Epoch: 2/10 Loss: 4.001172816753387
Epoch: 2/10 Loss: 3.984822296619415
Epoch: 2/10 Loss: 3.8411126065254213
Epoch: 2/10 Loss: 3.922910849571228
Epoch: 2/10 Loss: 3.958962037563324
Epoch: 2/10 Loss: 3.9063660702705385
Epoch: 2/10 Loss: 3.8479747009277343
Epoch: 2/10 Loss: 4.016427094936371
Epoch: 2/10 Loss: 4.076266962051392
Epoch: 2/10 Loss: 4.098210009098053
Epoch: 2/10 Loss: 4.033592838287354
Epoch: 2/10 Loss: 4.059541031837464
Epoch: 2/10 Loss: 4.063958287239075
Epoch: 2/10 Loss: 4.0333293137550355
Epoch: 3/10 Loss: 3.9757565113937003
Epoch: 3/10 Loss: 3.8758427290916444
Epoch: 3/10 Loss: 3.88838671875
Epoch: 3/10 Loss: 3.824080547571182
Epoch: 3/10 Loss: 3.792986105442047
Epoch: 3/10 Loss: 3.7399016814231874
Epoch: 3/10 Loss: 3.704944464683533
Epoch: 3/10 Loss: 3.7654943051338194
Epoch: 3/10 Loss: 3.872011314868927
Epoch: 3/10 Loss: 3.7711556096076966
Epoch: 3/10 Loss: 3.899056243896484
Epoch: 3/10 Loss: 3.9277945923805238
Epoch: 3/10 Loss: 3.855765347480774
Epoch: 3/10 Loss: 3.827311481952667
Epoch: 3/10 Loss: 3.8326210894584656
Epoch: 3/10 Loss: 3.7160993828773496
Epoch: 3/10 Loss: 3.765183704376221
Epoch: 3/10 Loss: 3.7670102190971373
Epoch: 3/10 Loss: 3.742635353088379
Epoch: 3/10 Loss: 3.723188822746277
Epoch: 3/10 Loss: 3.8859115767478944
Epoch: 3/10 Loss: 3.938003800868988
Epoch: 3/10 Loss: 3.9630267868041993
Epoch: 3/10 Loss: 3.9452022280693053
Epoch: 3/10 Loss: 3.9051348237991332
Epoch: 3/10 Loss: 3.906770224094391
Epoch: 3/10 Loss: 3.872326648235321
Epoch: 4/10 Loss: 3.8213996722167547
Epoch: 4/10 Loss: 3.754813754558563
Epoch: 4/10 Loss: 3.778704234600067
Epoch: 4/10 Loss: 3.7121593329906464
Epoch: 4/10 Loss: 3.6924094767570494
Epoch: 4/10 Loss: 3.615902979373932
Epoch: 4/10 Loss: 3.5962221393585203
Epoch: 4/10 Loss: 3.655794701576233
Epoch: 4/10 Loss: 3.752991834640503
Epoch: 4/10 Loss: 3.650555018424988
Epoch: 4/10 Loss: 3.770229367733002
Epoch: 4/10 Loss: 3.8105051136016845
Epoch: 4/10 Loss: 3.757607269287109
Epoch: 4/10 Loss: 3.7146813192367554
Epoch: 4/10 Loss: 3.6954393348693846
Epoch: 4/10 Loss: 3.609909596443176
Epoch: 4/10 Loss: 3.645623220205307
Epoch: 4/10 Loss: 3.6706583395004273
Epoch: 4/10 Loss: 3.6577448968887327
Epoch: 4/10 Loss: 3.617503483772278
Epoch: 4/10 Loss: 3.7930727858543394
Epoch: 4/10 Loss: 3.831050326347351
Epoch: 4/10 Loss: 3.865900510787964
Epoch: 4/10 Loss: 3.8364789719581602
Epoch: 4/10 Loss: 3.802232730388641
Epoch: 4/10 Loss: 3.8070947279930114
Epoch: 4/10 Loss: 3.7541960186958314
Epoch: 5/10 Loss: 3.7227334587718732
Epoch: 5/10 Loss: 3.6703632040023804
Epoch: 5/10 Loss: 3.7090781960487367
Epoch: 5/10 Loss: 3.6334264216423033
Epoch: 5/10 Loss: 3.5758563318252565
Epoch: 5/10 Loss: 3.5298464245796204
Epoch: 5/10 Loss: 3.5130391240119936
Epoch: 5/10 Loss: 3.568640021800995
Epoch: 5/10 Loss: 3.673930060386658
Epoch: 5/10 Loss: 3.5683663630485536
Epoch: 5/10 Loss: 3.6836147742271423
Epoch: 5/10 Loss: 3.7173752884864806
Epoch: 5/10 Loss: 3.693704773902893
Epoch: 5/10 Loss: 3.639796751022339
Epoch: 5/10 Loss: 3.6266501746177675
Epoch: 5/10 Loss: 3.537646650791168
Epoch: 5/10 Loss: 3.5568584225177764
Epoch: 5/10 Loss: 3.594251932144165
Epoch: 5/10 Loss: 3.593085802555084
Epoch: 5/10 Loss: 3.5424518637657165
Epoch: 5/10 Loss: 3.715118019104004
Epoch: 5/10 Loss: 3.7201204266548156
Epoch: 5/10 Loss: 3.781272727012634
Epoch: 5/10 Loss: 3.749784249782562
Epoch: 5/10 Loss: 3.719280736923218
Epoch: 5/10 Loss: 3.727500115394592
Epoch: 5/10 Loss: 3.682547796726227
Epoch: 6/10 Loss: 3.6491394352735167
Epoch: 6/10 Loss: 3.598791978597641
Epoch: 6/10 Loss: 3.6471619458198545
Epoch: 6/10 Loss: 3.5789610514640806
Epoch: 6/10 Loss: 3.521570070743561
Epoch: 6/10 Loss: 3.4735229415893554
Epoch: 6/10 Loss: 3.4610399765968323
Epoch: 6/10 Loss: 3.506699200630188
Epoch: 6/10 Loss: 3.615067078113556
Epoch: 6/10 Loss: 3.502860247135162
Epoch: 6/10 Loss: 3.614334485054016
Epoch: 6/10 Loss: 3.6802509765625
Epoch: 6/10 Loss: 3.6467618050575257
Epoch: 6/10 Loss: 3.5814969363212588
Epoch: 6/10 Loss: 3.559717480182648
Epoch: 6/10 Loss: 3.470000730037689
Epoch: 6/10 Loss: 3.4977754936218264
Epoch: 6/10 Loss: 3.542849807262421
Epoch: 6/10 Loss: 3.5385200643539427
Epoch: 6/10 Loss: 3.4741253933906555
Epoch: 6/10 Loss: 3.6507114667892457
Epoch: 6/10 Loss: 3.6616494665145876
Epoch: 6/10 Loss: 3.7155288076400756
Epoch: 6/10 Loss: 3.6751344275474547
Epoch: 6/10 Loss: 3.6744229364395142
Epoch: 6/10 Loss: 3.6735162725448607
Epoch: 6/10 Loss: 3.641093776702881
Epoch: 7/10 Loss: 3.592488924154458
Epoch: 7/10 Loss: 3.555366448879242
Epoch: 7/10 Loss: 3.5979912238121035
Epoch: 7/10 Loss: 3.550986848831177
Epoch: 7/10 Loss: 3.47798468542099
Epoch: 7/10 Loss: 3.4244713988304136
Epoch: 7/10 Loss: 3.414913432121277
Epoch: 7/10 Loss: 3.4759127793312072
Epoch: 7/10 Loss: 3.555940191745758
Epoch: 7/10 Loss: 3.4368428192138674
Epoch: 7/10 Loss: 3.575292818069458
Epoch: 7/10 Loss: 3.616502721309662
Epoch: 7/10 Loss: 3.597757300376892
Epoch: 7/10 Loss: 3.542052065372467
Epoch: 7/10 Loss: 3.501235333442688
Epoch: 7/10 Loss: 3.414286627292633
Epoch: 7/10 Loss: 3.436218584537506
Epoch: 7/10 Loss: 3.495600422382355
Epoch: 7/10 Loss: 3.4933184356689453
Epoch: 7/10 Loss: 3.424741901397705
Epoch: 7/10 Loss: 3.598416178703308
Epoch: 7/10 Loss: 3.599672047138214
Epoch: 7/10 Loss: 3.6698138718605042
Epoch: 7/10 Loss: 3.61639994764328
Epoch: 7/10 Loss: 3.6205502710342405
Epoch: 7/10 Loss: 3.622935773849487
Epoch: 7/10 Loss: 3.591764862537384
Epoch: 8/10 Loss: 3.5497302150827883
Epoch: 8/10 Loss: 3.507222337245941
Epoch: 8/10 Loss: 3.562634430885315
Epoch: 8/10 Loss: 3.494962973117828
Epoch: 8/10 Loss: 3.436317138195038
Epoch: 8/10 Loss: 3.391021825313568
Epoch: 8/10 Loss: 3.372903301715851
Epoch: 8/10 Loss: 3.443020212650299
Epoch: 8/10 Loss: 3.5105148310661316
Epoch: 8/10 Loss: 3.4045159673690795
Epoch: 8/10 Loss: 3.5337830901145937
Epoch: 8/10 Loss: 3.5693638768196108
Epoch: 8/10 Loss: 3.5510009489059446
Epoch: 8/10 Loss: 3.499940137863159
Epoch: 8/10 Loss: 3.460319149494171
Epoch: 8/10 Loss: 3.382239278793335
Epoch: 8/10 Loss: 3.3949890727996825
Epoch: 8/10 Loss: 3.4457347383499144
Epoch: 8/10 Loss: 3.460291368961334
Epoch: 8/10 Loss: 3.3889461963176726
Epoch: 8/10 Loss: 3.5622550172805787
Epoch: 8/10 Loss: 3.5656433238983154
Epoch: 8/10 Loss: 3.6259664478302
Epoch: 8/10 Loss: 3.585397023677826
Epoch: 8/10 Loss: 3.5795237004756926
Epoch: 8/10 Loss: 3.5828110337257386
Epoch: 8/10 Loss: 3.5596083903312685
Epoch: 9/10 Loss: 3.507722315189049
Epoch: 9/10 Loss: 3.4664843771457674
Epoch: 9/10 Loss: 3.5297602620124815
Epoch: 9/10 Loss: 3.45001762008667
Epoch: 9/10 Loss: 3.402861466884613
Epoch: 9/10 Loss: 3.35923495388031
Epoch: 9/10 Loss: 3.3372762861251832
Epoch: 9/10 Loss: 3.4206077075004577
Epoch: 9/10 Loss: 3.4675138511657715
Epoch: 9/10 Loss: 3.357544892311096
Epoch: 9/10 Loss: 3.4998889636993407
Epoch: 9/10 Loss: 3.5455175223350524
Epoch: 9/10 Loss: 3.52463206577301
Epoch: 9/10 Loss: 3.463266794681549
Epoch: 9/10 Loss: 3.42004478931427
Epoch: 9/10 Loss: 3.354629065036774
Epoch: 9/10 Loss: 3.360728132247925
Epoch: 9/10 Loss: 3.4150163197517394
Epoch: 9/10 Loss: 3.428225482940674
Epoch: 9/10 Loss: 3.3671119146347044
Epoch: 9/10 Loss: 3.51234423828125
Epoch: 9/10 Loss: 3.530109088420868
Epoch: 9/10 Loss: 3.5903311371803284
Epoch: 9/10 Loss: 3.5549391975402833
Epoch: 9/10 Loss: 3.5448558604717255
Epoch: 9/10 Loss: 3.5492285294532775
Epoch: 9/10 Loss: 3.5203405690193175
Epoch: 10/10 Loss: 3.4694322308285424
Epoch: 10/10 Loss: 3.4350241575241087
Epoch: 10/10 Loss: 3.4964673109054565
Epoch: 10/10 Loss: 3.4268339445590974
Epoch: 10/10 Loss: 3.374123175621033
Epoch: 10/10 Loss: 3.342466145992279
Epoch: 10/10 Loss: 3.3205345120429994
Epoch: 10/10 Loss: 3.385893027305603
Epoch: 10/10 Loss: 3.4253044290542602
Epoch: 10/10 Loss: 3.331146954536438
Epoch: 10/10 Loss: 3.465665825843811
Epoch: 10/10 Loss: 3.516086208343506
Epoch: 10/10 Loss: 3.496067138671875
Epoch: 10/10 Loss: 3.4270591368675234
Epoch: 10/10 Loss: 3.3953236956596373
Epoch: 10/10 Loss: 3.3374750680923464
Epoch: 10/10 Loss: 3.3419807810783384
Epoch: 10/10 Loss: 3.3874538865089416
Epoch: 10/10 Loss: 3.4012483048439024
Epoch: 10/10 Loss: 3.3396986713409422
Epoch: 10/10 Loss: 3.47789173078537
Epoch: 10/10 Loss: 3.5020601248741148
Epoch: 10/10 Loss: 3.564427523136139
Epoch: 10/10 Loss: 3.5341336789131166
Epoch: 10/10 Loss: 3.505482707500458
Epoch: 10/10 Loss: 3.5157456007003782
Epoch: 10/10 Loss: 3.488763710975647
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** sequence_length: 10, because with much longer sequences the backpropagated gradients tended toward zero and training slowed down. batch_size: 64, to make good use of the GPU's parallelism; since there are plenty of examples, a larger batch also gives faster convergence. num_epochs: 10, since the per-epoch drop in loss had clearly flattened by then. learning_rate: 0.001, which gave a steady decrease in loss. embedding_dim: 200; with a vocabulary of roughly 43k words, the main goal of the embedding here is dimensionality reduction rather than rich semantic structure, so a modest dimension suffices. hidden_dim: 250, enough to extract a reasonable number of features per word. n_layers: assuming the problem is not complex enough to need a deep stack, I experimented with 1 to 3 layers and found that 2 layers suffice. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
ls
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
print(f"number of words: {len(text)}")
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
print(vocab[0:20])
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
number of words: 104
['moe_szyslak', 'mike', 'rotch', 'you', 'your', 'to', 'drink', 'the', 'yeah', 'name', 'on', 'hey', 'one', "i'm", 'gonna', 'my', 'homer', 'not', 'problems', 'should']
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
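To make the role of the dictionary concrete, here is a tiny sketch of how such a lookup is typically applied before splitting on spaces; the two-entry dictionary and the sentence are made up for illustration:
```
# assumed mini version of the punctuation dictionary, for illustration only
punc = {".": "||Period||", "!": "||Exclamation_mark||"}

text = "bye. bye!"
for symbol, token in punc.items():
    text = text.replace(symbol, " {} ".format(token))

print(text.split())
# ['bye', '||Period||', 'bye', '||Exclamation_mark||'] -> "bye" now maps to a single word id
```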
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_dict = {
".": "||Period||",
",": "||Comma||",
'"': "||Quotation_Mark||",
";": "||Semicolon||",
"!": "||Exclamation_mark||",
"?": "||Question_mark||",
"(": "||Left_Parentheses||",
")": "||Right_Parentheses||",
"-": "||Dash||",
"\n": "||Return||",
}
return punc_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
number of words: 892111
['||return||', '||period||', '||comma||', '||question_mark||', 'you', 'i', 'the', 'jerry:', 'to', 'a', '||exclamation_mark||', '||left_parentheses||', '||right_parentheses||', 'george:', 'elaine:', 'it', 'kramer:', 'and', 'what', 'that']
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
print(int_text[0:20])
###Output
[24, 22, 47, 1, 1, 1, 17, 47, 22, 82, 20, 6, 1252, 545, 8782, 7189, 20, 241, 1, 149]
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_words = len(words)
features = []
targets = []
for i in range(n_words - sequence_length):
features.append(words[i:(i+sequence_length)])
targets.append(words[i+sequence_length])
feature_tensors = torch.from_numpy(np.array(features))
target_tensors = torch.from_numpy(np.array(targets))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 44, 45, 46, 47, 48],
[ 29, 30, 31, 32, 33],
[ 19, 20, 21, 22, 23],
[ 3, 4, 5, 6, 7],
[ 43, 44, 45, 46, 47],
[ 18, 19, 20, 21, 22],
[ 32, 33, 34, 35, 36],
[ 7, 8, 9, 10, 11],
[ 26, 27, 28, 29, 30],
[ 40, 41, 42, 43, 44]])
torch.Size([10])
tensor([ 49, 34, 24, 8, 48, 23, 37, 12, 31, 45])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
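###Markdown
Before training, an optional shape check on made-up toy sizes can confirm that the forward pass returns one row of word scores per sequence in the batch; this is only an illustrative sketch, not one of the required cells.
###Code
# optional shape check with made-up toy sizes (illustrative only)
toy_vocab, toy_batch, toy_seq = 20, 4, 5
toy_rnn = RNN(toy_vocab, toy_vocab, embedding_dim=8, hidden_dim=16, n_layers=2)
toy_input = torch.randint(0, toy_vocab, (toy_batch, toy_seq))
if train_on_gpu:
    toy_rnn.cuda()
    toy_input = toy_input.cuda()
toy_hidden = toy_rnn.init_hidden(toy_batch)
toy_out, toy_hidden = toy_rnn(toy_input, toy_hidden)
print(toy_out.shape)  # expected: torch.Size([4, 20]), i.e. (batch_size, output_size)
###Output
_____no_output_____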
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
clip = 5 # gradient clipping
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
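###Markdown
The `tuple([each.data for each in hidden])` line above detaches the hidden state from the graph built by previous batches, so backpropagation stops at the current batch; a tiny illustrative sketch of what that detaching does:
###Code
# tiny illustration of dropping the computation graph with .data / .detach()
tracked = torch.zeros(2, 3, requires_grad=True) * 2.0  # result of an op, so part of a graph
carried = tracked.data                                  # like .detach(): no graph attached
print(tracked.requires_grad, carried.requires_grad)     # True False
###Output
_____no_output_____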
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Batch: {:>6}/{:<6} Loss: {}\n'.format(
epoch_i, n_epochs, batch_i, n_batches, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
print(f"n epoches: {len(train_loader.dataset)//batch_size}")
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 512
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Batch: 500/3484 Loss: 5.663360884666443
Epoch: 1/10 Batch: 1000/3484 Loss: 4.831448855400086
Epoch: 1/10 Batch: 1500/3484 Loss: 4.542231200695038
Epoch: 1/10 Batch: 2000/3484 Loss: 4.4025792922973634
Epoch: 1/10 Batch: 2500/3484 Loss: 4.33169855928421
Epoch: 1/10 Batch: 3000/3484 Loss: 4.259434262752533
Epoch: 2/10 Batch: 500/3484 Loss: 4.139444394324853
Epoch: 2/10 Batch: 1000/3484 Loss: 4.041079972743988
Epoch: 2/10 Batch: 1500/3484 Loss: 4.017776796817779
Epoch: 2/10 Batch: 2000/3484 Loss: 4.002577679634094
Epoch: 2/10 Batch: 2500/3484 Loss: 3.9782147274017334
Epoch: 2/10 Batch: 3000/3484 Loss: 3.974166464805603
Epoch: 3/10 Batch: 500/3484 Loss: 3.8789712634028457
Epoch: 3/10 Batch: 1000/3484 Loss: 3.8081966495513915
Epoch: 3/10 Batch: 1500/3484 Loss: 3.800468334197998
Epoch: 3/10 Batch: 2000/3484 Loss: 3.780108416557312
Epoch: 3/10 Batch: 2500/3484 Loss: 3.781549256324768
Epoch: 3/10 Batch: 3000/3484 Loss: 3.777507827758789
Epoch: 4/10 Batch: 500/3484 Loss: 3.7015285409562955
Epoch: 4/10 Batch: 1000/3484 Loss: 3.6166465950012205
Epoch: 4/10 Batch: 1500/3484 Loss: 3.635990705490112
Epoch: 4/10 Batch: 2000/3484 Loss: 3.636909219264984
Epoch: 4/10 Batch: 2500/3484 Loss: 3.6534952993392946
Epoch: 4/10 Batch: 3000/3484 Loss: 3.6380959582328796
Epoch: 5/10 Batch: 500/3484 Loss: 3.5673577167638917
Epoch: 5/10 Batch: 1000/3484 Loss: 3.5031520676612855
Epoch: 5/10 Batch: 1500/3484 Loss: 3.5147171349525452
Epoch: 5/10 Batch: 2000/3484 Loss: 3.527718838214874
Epoch: 5/10 Batch: 2500/3484 Loss: 3.505029433250427
Epoch: 5/10 Batch: 3000/3484 Loss: 3.514076425552368
Epoch: 6/10 Batch: 500/3484 Loss: 3.454579778318483
Epoch: 6/10 Batch: 1000/3484 Loss: 3.3850678353309633
Epoch: 6/10 Batch: 1500/3484 Loss: 3.392079020023346
Epoch: 6/10 Batch: 2000/3484 Loss: 3.413806875228882
Epoch: 6/10 Batch: 2500/3484 Loss: 3.417744038105011
Epoch: 6/10 Batch: 3000/3484 Loss: 3.434941940784454
Epoch: 7/10 Batch: 500/3484 Loss: 3.3532950778802237
Epoch: 7/10 Batch: 1000/3484 Loss: 3.291990752220154
Epoch: 7/10 Batch: 1500/3484 Loss: 3.3149483485221864
Epoch: 7/10 Batch: 2000/3484 Loss: 3.331819426059723
Epoch: 7/10 Batch: 2500/3484 Loss: 3.3365014786720275
Epoch: 7/10 Batch: 3000/3484 Loss: 3.345066442012787
Epoch: 8/10 Batch: 500/3484 Loss: 3.276506210245737
Epoch: 8/10 Batch: 1000/3484 Loss: 3.2259902830123903
Epoch: 8/10 Batch: 1500/3484 Loss: 3.2279099283218384
Epoch: 8/10 Batch: 2000/3484 Loss: 3.2522306418418885
Epoch: 8/10 Batch: 2500/3484 Loss: 3.2650704588890074
Epoch: 8/10 Batch: 3000/3484 Loss: 3.275428211212158
Epoch: 9/10 Batch: 500/3484 Loss: 3.211241220071064
Epoch: 9/10 Batch: 1000/3484 Loss: 3.161178931713104
Epoch: 9/10 Batch: 1500/3484 Loss: 3.180769609451294
Epoch: 9/10 Batch: 2000/3484 Loss: 3.1862905921936036
Epoch: 9/10 Batch: 2500/3484 Loss: 3.2103903799057005
Epoch: 9/10 Batch: 3000/3484 Loss: 3.215808619976044
Epoch: 10/10 Batch: 500/3484 Loss: 3.1461442631434617
Epoch: 10/10 Batch: 1000/3484 Loss: 3.1129529523849486
Epoch: 10/10 Batch: 1500/3484 Loss: 3.1034449858665467
Epoch: 10/10 Batch: 2000/3484 Loss: 3.132664776325226
Epoch: 10/10 Batch: 2500/3484 Loss: 3.1426220808029175
Epoch: 10/10 Batch: 3000/3484 Loss: 3.165808174133301
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried two different sequence_lengths, 50 and 10: a sequence_length of 10 made the model converge faster, while 50 took very long to run. hidden_dim is the number of units in the hidden layers of the LSTM cells; larger values usually perform better, but the network gets bigger and trains more slowly. Common values are 128, 256, 512, etc., so I selected the larger value of 512. n_layers is the number of LSTM layers in the network, typically between 1 and 3; I selected the larger value of 3. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
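###Markdown
The top-k sampling step inside `generate` can be tried in isolation; below is a small illustrative sketch on made-up word scores for a vocabulary of 8 ids.
###Code
# top_k sampling in isolation, on made-up word scores (illustrative only)
import numpy as np
import torch.nn.functional as F
toy_scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.05, 0.7, 1.2, 0.2]])
toy_p = F.softmax(toy_scores, dim=1).data
toy_p, toy_top_i = toy_p.topk(5)               # keep only the 5 most likely ids
toy_top_i = toy_top_i.numpy().squeeze()
toy_p = toy_p.numpy().squeeze()
# renormalize over the kept ids and sample one of them with some randomness
next_id = np.random.choice(toy_top_i, p=toy_p / toy_p.sum())
print(toy_top_i, next_id)
###Output
_____no_output_____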
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
token = {'.': '||PERIOD||',
',': '||COMMA||',
'"': '||QUOTATION_MARK||',
';': '||SEMICOLON||',
'!': '||EXCLAMATION_MARK||',
'?': '||QUESTION_MARK||',
'(': '||LEFT_PAREN||',
')': '||RIGHT_PAREN||',
'-': '||DASH||',
'\n': '<NEW_LINE>'}
return token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
target = []
for i_begin in range(0, len(words) - sequence_length):
i_end = i_begin + sequence_length
features.append(words[i_begin:i_end])
target.append(words[i_end])
torch_features = torch.from_numpy(np.asarray(features))
torch_target = torch.from_numpy(np.asarray(target))
data = TensorDataset(torch_features, torch_target)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]], dtype=torch.int32)
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], dtype=torch.int32)
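###Markdown
The batches above come back in their original order because this `DataLoader` was created without shuffling; if shuffled batches are preferred (as in the example output shown earlier), `DataLoader` accepts a `shuffle=True` flag. A minimal sketch on toy tensors:
###Code
# toy example of the same kind of dataset, but with shuffled batches (illustrative only)
toy_features = torch.arange(50).view(10, 5)
toy_targets = torch.arange(10)
toy_loader = DataLoader(TensorDataset(toy_features, toy_targets), batch_size=4, shuffle=True)
for xb, yb in toy_loader:
    print(xb.shape, yb)  # rows come out in a random order each run
    break
###Output
_____no_output_____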
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.n_hidden = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
#print(self.embed)
self.lstm = nn.LSTM(embedding_dim, self.n_hidden, self.n_layers, dropout=dropout, batch_first=True)
#print(self.lstm)
self.fc = nn.Linear(self.n_hidden, self.output_size)
print(self)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# cast the batch of word ids to int64 (LongTensor) for the embedding layer; works around a Windows int32 dtype issue
input = torch.tensor(nn_input.detach()).to(torch.int64)
input = self.embed(input)
output, hidden = self.lstm(input, hidden)
output = output.contiguous().view(-1, self.n_hidden)
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
RNN(
(embed): Embedding(20, 15)
(lstm): LSTM(15, 10, num_layers=2, batch_first=True, dropout=0.5)
(fc): Linear(in_features=10, out_features=20, bias=True)
)
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inputs, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip = 5
# move data to GPU, if available
target = target.long()
if(train_on_gpu):
rnn.cuda()
inputs, target = inputs.cuda(), target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
RNN(
(embed): Embedding(20, 15)
(lstm): LSTM(15, 10, num_layers=2, batch_first=True, dropout=0.5)
(fc): Linear(in_features=10, out_features=10, bias=True)
)
Tests Passed
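###Markdown
The `clip_grad_norm_` call above rescales the gradients in place so that their overall norm never exceeds `clip`, which helps against exploding gradients in LSTMs; here is a standalone illustrative sketch with a deliberately large toy gradient.
###Code
# standalone illustration of gradient clipping on a toy parameter (made-up values)
toy_w = nn.Parameter(torch.randn(3, 3))
toy_loss = (100 * toy_w).sum()   # every entry gets a gradient of 100
toy_loss.backward()
print(toy_w.grad.norm())         # 300.0 here: sqrt(9 * 100**2)
nn.utils.clip_grad_norm_([toy_w], max_norm=5)
print(toy_w.grad.norm())         # rescaled down to 5.0
###Output
_____no_output_____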
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
RNN(
(embed): Embedding(21388, 300)
(lstm): LSTM(300, 512, num_layers=2, batch_first=True, dropout=0.5)
(fc): Linear(in_features=512, out_features=21388, bias=True)
)
Training for 15 epoch(s)...
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - num_epochs = 20 and 10- learning_rate = 0.001In previous exercises, this learning rate was set at 0.003 for Skip_Grams and 0.001 for Character_level. I tried a learning rate of 0.001, which seems a good starting point: learning is slower, but I can observe the behaviour of the optimization and, if needed, stop training and increase the learning rate. Concerning the number of epochs, I tried 20 but training was much too slow, and the loss drops below 3.5 around epoch 3; I also tested 10 epochs to get an overview of the network's behaviour. Finally, concerning hidden_dim, I tested the values that were seen during the training and they seem quite good. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
C:\Users\Damien\.conda\envs\pytorch_udacity\lib\site-packages\ipykernel_launcher.py:41: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
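###Markdown
The `UserWarning` above comes from the `torch.tensor(nn_input.detach())` call in `RNN.forward`; it is harmless here, but the message itself suggests cloning/detaching (or simply casting) instead. An illustrative sketch of the quieter alternatives:
###Code
# quieter alternatives to torch.tensor(existing_tensor) for the dtype cast (illustrative only)
existing = torch.zeros(2, 3, dtype=torch.int32)
as_long_a = existing.to(torch.int64)                    # plain dtype cast
as_long_b = existing.clone().detach().to(torch.int64)   # what the warning message recommends
print(as_long_a.dtype, as_long_b.dtype)
###Output
_____no_output_____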
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
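To make the intent concrete, below is a rough sketch of how such a dictionary could be applied to raw text during pre-processing (the actual substitution presumably happens inside `helper.preprocess_and_save_data`; this is for illustration only and assumes the `token_lookup` function defined in the next cell):

```
# illustration only: surround each punctuation symbol with spaces using the token dict
sample = 'hello! are you there?'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
# splitting on whitespace now keeps words and punctuation tokens separate
print(sample.split())
```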
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
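As a quick sanity check of the windowing described above, here is a minimal stand-alone sketch in plain Python (independent of the `batch_data` implementation you will write below):

```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features, targets = [], []
for i in range(len(words) - sequence_length):
    features.append(words[i:i + sequence_length])  # e.g. [1, 2, 3, 4]
    targets.append(words[i + sequence_length])     # e.g. 5

print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```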
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# print("feature: ",x_batch)
batch_y = words[idx_end]
# print("target: ", batch_y)
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure to SHUFFLE your training data (left off here so the test batches below stay in order)
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
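The reshaping described in the hints can be checked in isolation with dummy tensors; the sizes below are arbitrary examples chosen only to make the shapes visible:

```
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, output_size = 4, 5, 8, 10
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)

# stack the LSTM outputs so every time step passes through the same linear layer
stacked = lstm_output.contiguous().view(-1, hidden_dim)    # shape (20, 8)
fc = nn.Linear(hidden_dim, output_size)
output = fc(stacked).view(batch_size, -1, output_size)     # shape (4, 5, 10)

# keep only the word scores for the last time step of each sequence
out = output[:, -1]                                        # shape (4, 10)
print(out.shape)
```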
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer (dropout is only applied between the stacked LSTM layers)
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
if(train_on_gpu):
rnn.cuda()
# # Creating new variables for the hidden state, otherwise
# # we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
# print(h[0].data)
# get predicted outputs
output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
print(len(vocab_to_int))
###Output
21388
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.969469286084175
Epoch: 1/10 Loss: 4.530352792859078
Epoch: 1/10 Loss: 4.382466664671898
Epoch: 2/10 Loss: 4.134338384850649
Epoch: 2/10 Loss: 3.972323559641838
Epoch: 2/10 Loss: 3.9209044603109358
Epoch: 3/10 Loss: 3.817668427113253
Epoch: 3/10 Loss: 3.7509066389799117
Epoch: 3/10 Loss: 3.723487420916557
Epoch: 4/10 Loss: 3.651983377184829
Epoch: 4/10 Loss: 3.6104639555215834
Epoch: 4/10 Loss: 3.583416987538338
Epoch: 5/10 Loss: 3.531450895366643
Epoch: 5/10 Loss: 3.5065197635889054
Epoch: 5/10 Loss: 3.4801937156915663
Epoch: 6/10 Loss: 3.451787300427969
Epoch: 6/10 Loss: 3.427888125538826
Epoch: 6/10 Loss: 3.402125217318535
Epoch: 7/10 Loss: 3.3790099206317787
Epoch: 7/10 Loss: 3.3646440657377243
Epoch: 7/10 Loss: 3.3423242206573485
Epoch: 8/10 Loss: 3.3285066263694967
Epoch: 8/10 Loss: 3.3073820313215254
Epoch: 8/10 Loss: 3.292244491219521
Epoch: 9/10 Loss: 3.2856493308698393
Epoch: 9/10 Loss: 3.2640193623304365
Epoch: 9/10 Loss: 3.2502937450408935
Epoch: 10/10 Loss: 3.249068915361985
Epoch: 10/10 Loss: 3.223403890252113
Epoch: 10/10 Loss: 3.2125546483993532
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Going over the course material regarding embedding, I noticed that typical embedding dimensions are around 200 - 300 in size. I tried:* sequence_length = 10, batch_size = 128, learning_rate = 0.001, embedding_dim = 200, hidden_dim = 250, n_layers = 2 Started with: Training for 10 epoch(s)... Epoch: 1/10 Loss: 4.944083527803421 ... Epoch: 4/10 Loss: 3.5780555000305174 ... Epoch: 7/10 Loss: 3.3266124720573425 ...* sequence_length = 10, batch_size = 124, learning_rate = 0.1, embedding_dim = 200, hidden_dim = 200, n_layers = 2 Started with: Training for 10 epoch(s)... Epoch: 1/10 Loss: 5.481069218158722 Epoch: 2/10 Loss: 5.025624033570289 Epoch: 3/10 Loss: 4.981013494968415 I stopped here because, even though the loss was still decreasing, it seemed to converge much more slowly than the previous experiment with a lower learning rate and a slightly bigger hidden_dim.* The first experiment above reached: Epoch: 10/10 Loss: 3.2125546483993532. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
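The top-k sampling step mentioned above can be illustrated on its own with made-up word scores; a minimal sketch, separate from the `generate` function below:

```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.randn(1, 12)        # pretend word scores over a 12-word vocabulary
p = F.softmax(scores, dim=1).data  # convert scores to probabilities

top_k = 5
p, top_i = p.topk(top_k)           # keep the 5 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

# sample one of the top-k ids, weighted by their renormalized probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```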
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:45: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
import collections
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# Count and sort the corpus
word_counts = collections.Counter(text)
sorted_counts = word_counts.most_common()
# create the look up dictionaries
int_to_vocab = {n: word_tuple[0] for n, word_tuple in enumerate(sorted_counts)}
vocab_to_int = {word: n for n, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
return {
'.': '||period||',
',': '||comma||',
'"': '||doublequote||',
';': '||semicolon||',
'!': '||exclamation||',
'?': '||questionmark||',
'(': '||lparenth||',
')': '||rparenth||',
'-': '||dash||',
'\n': '||newline||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import numpy as np
import torch.nn as nn
import helper
import problem_unittests as tests
from tqdm import tqdm
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# get number of targets we can make (must create full sequences)
n_targets = len(words) - sequence_length
# create the targets and features
features, targets = [], []
for i in range(n_targets):
features.append(words[i : i+sequence_length])
targets.append(words[i+sequence_length])
# convert the Python lists to PyTorch tensors
features, targets = np.asarray(features), np.asarray(targets)
features, targets = torch.from_numpy(features), torch.from_numpy(targets)
# instantiate PyTorch's TensorDataset and DataLoader classes
dataset = TensorDataset(features, targets)
dataloader = DataLoader(dataset, shuffle=True, batch_size=batch_size)
return dataloader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[41, 42, 43, 44, 45],
[43, 44, 45, 46, 47],
[ 8, 9, 10, 11, 12],
[27, 28, 29, 30, 31],
[38, 39, 40, 41, 42],
[ 5, 6, 7, 8, 9],
[ 3, 4, 5, 6, 7],
[37, 38, 39, 40, 41],
[29, 30, 31, 32, 33],
[20, 21, 22, 23, 24]], dtype=torch.int32)
torch.Size([10])
tensor([46, 48, 13, 32, 43, 10, 8, 42, 34, 25], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
import torch.nn.functional as functional
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# init hidden weights params
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.vocab_size = vocab_size
# define the embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
# define the LSTM layer
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# define fully-connected layer
self.dense = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# ensure embedding layer gets a LongTensor input
nn_input = nn_input.long()
# get the batch size for reshaping
batch_size = nn_input.size(0)
## define forward pass
embed = self.embedding(nn_input)
output, state = self.lstm(embed, hidden)
# stack LSTM
output = output.contiguous().view(-1, self.hidden_dim)
# pass through last fully connected layer
output = self.dense(output)
output = output.view(batch_size, -1, self.vocab_size)
output = output[:, -1] # save only the last output
# return one batch of output word scores and the hidden state
return output, state
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (torch.cuda.is_available()): #
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
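One detail worth noting in the implementation below: it clips gradient *values* rather than the gradient *norm*. Both utilities exist in PyTorch and are applied after `loss.backward()` and before `optimizer.step()`; a minimal sketch of the two options, assuming `rnn` is the model defined above:

```
import torch.nn as nn

# option 1: rescale all gradients together so their combined norm is at most 5
nn.utils.clip_grad_norm_(rnn.parameters(), 5)

# option 2: clamp each individual gradient entry into the range [-4, 4]
nn.utils.clip_grad_value_(rnn.parameters(), 4)
```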
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# move data to GPU, if available
rnn.to(device)
inp, target = inp.to(device), target.to(device)
# detach the hidden state to prevent backprop through the entire training history
hidden = tuple([hid.data for hid in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output and hidden state from the model
output, hidden = rnn(inp, hidden)
# calculate the loss
loss = criterion(output.squeeze(), target.long())
# perform backpropagation
loss.backward()
# clip gradients to prevent them from becoming too large before the optimizer step
# nn.utils.clip_grad_norm_(rnn.parameters(), 4)  # alternative: clip the gradient norm instead of individual values
nn.utils.clip_grad_value_(rnn.parameters(), 4)
optimizer.step()
# ensure everything is sent back to cpu processing
rnn.to('cpu')
inp, target = inp.to('cpu'), target.to('cpu')
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in tqdm(range(1, n_epochs + 1)):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 6 #13 # of words in a sequence
# Batch Size
batch_size = 32 # 32
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30 #
# Learning Rate
learning_rate = 0.0005 # 0.001 0.002
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400 # 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
# continue training :(
# save states
state = {'epoch': num_epochs + 1, 'state_dict': trained_rnn.state_dict(),
'optimizer': optimizer.state_dict()}
filename = 'trained30_rnn.pt'
torch.save(state, filename)
model = rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
opt = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
def load_checkpoint(model, optimizer, filename):
'''
Note: Input model & optimizer should be pre-defined. This routine only updates their states.
'''
start_epoch = 0
checkpoint = torch.load(filename)
start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
return model, optimizer, start_epoch
model, opt, start_epoch = load_checkpoint(model, optimizer, filename=filename)
device = 'cpu' # avoiding device errors
model = model.to(device)
# now individually transfer the optimizer parts...
for state in opt.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.to(device)
# training the model
model = train_rnn(model, batch_size, opt, criterion, 2, show_every_n_batches)
state = {'epoch': 33, 'state_dict': model.state_dict(), 'optimizer': opt.state_dict()}
filename = 'trained_rnn32.pt'
torch.save(state, filename)
helper.save_model('./save/trained_rnn', model)
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** * __Sequence length__: In the end I chose 6-word sequences. At first, I tried 13 words per sequence. However, looking at the average words per line (5~), I thought I could have made it shorter. So on the next iteration, I went with 8. Then, I reduced it to 6 to be closer to the average words per line in the original script and hoped the model would train faster. * __Batch size__: I followed Jay's advice on that. I started with 32, as, historically speaking, it is the smallest size to yield good results. * __Learning Rate__: Typically, low and slow wins the race; hence, I chose 0.001. _note: the loss dropped drastically on the 10th epoch. However, it was still around 3.6_ Then, I doubled the learning rate (0.002), hoping for faster convergence. However, I still did not get the results I was hoping for. So, I tried lowering the learning rate to 0.0005. * __Embedding dimensions__: Jay's lectures recommended that 200 - 500 hidden dimensions work well in most cases. Although this seemed like an easy choice, I decided to try 300. On the next iteration, I tried 400 and it did better.* __Hidden dimension__: There are approximately 21 thousand unique words in the dataset. Since this is my first time solving this problem, I decided to be generous with the hidden layer size (512 hidden units). Also, I wanted the hidden layer to be larger than the embedding layer. * __Number of layers__: Empirically speaking, deeper networks function better. For this specific use case, where overfitting is not an issue, three seemed like a good option. However, I trained this model locally and my GPU doesn't have enough memory, so three was too much, while one did not seem enough. Thus, two seemed good to me.> While it is not theoretically clear what is the additional power gained by the deeper architecture, it was observed empirically that deep RNNs work better than shallower ones on some tasks. In particular, Sutskever et al. (2014) report that a 4-layers deep architecture was crucial in achieving good machine-translation performance in an encoder-decoder framework. Irsoy and Cardie (2014) also report improved results from moving from a one-layer biRNN to an architecture with several layers. Many other works report results using layered RNN architectures, but do not explicitly compare to 1-layer RNNs. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
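Note that in this run the `generate` function below was adapted to build its input sequences on the CPU, so the network should be moved off the GPU before sampling; a minimal sketch, assuming the `model` variable from the training cell above:

```
# move the trained network to the CPU so it matches the CPU tensors built inside generate()
model = model.to('cpu')
model.eval()
```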
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
# if train_on_gpu:
# current_seq = torch.LongTensor(current_seq).cuda()
# else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
hidden = [hid.to('cpu') for hid in hidden]
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'newman' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(model, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
newman: brody, i think we can get him out of here.
jerry:(reading) gary fogel saw her dude cry.
kramer: oh, you know what, let's go.
jerry: oh, i can't believe i'm gonna get away from the subway!
jerry: i don't think so.
kramer: yeah, well.
jerry: oh, no. no, i'm afraid i can't help him.
jerry: well, you should see a doctor?
kramer: yeah, i guess.
elaine: i can't believe it, i can't believe this.
jerry: you think she'd hit the friendship.
jerry: well, what are you gonna do?
george: i don't know. you can't stand around.
elaine: oh.
jerry:(confused) aww.(holds his nose up)
kramer: well it's not a date, it was the only issue you've ever seen for yourself, huh?
jerry: no, no, i'm afraid he's gonna call you.
elaine:(sighs) well, i think it's fantastic.
jerry: oh, come on, george.
jerry: yeah, i know.
george: well you didn't mention to me that i would possibly care enough for that kind of crap.
george: well, you know what this means, but it's only used to be an actress.
kramer: hey, i know what i do. i'm feelin' kidding. i'm aware of this, it's all white.
jerry:(confused) what're you saying?
kramer: well, i don't know what it means, but you gotta finish it, it's not yours. it's just a little burning.
jerry: oh, i don't wanna go see him.
elaine: oh, i think we should do something.
jerry: yeah, well, you know, i don't know.
george: well, i don't know.
elaine:(handing the bottle back) oh, my god!(
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
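###Markdown
A sketch of one possible implementation is shown below; the name `create_lookup_tables_sketch` is hypothetical and kept separate so the graded stub above stays untouched. Ordering the vocabulary by word frequency (via `collections.Counter`) is a convention rather than a requirement, since any consistent word/id mapping satisfies the description above.
###Code
# A minimal sketch, assuming a frequency-sorted vocabulary (any consistent ordering works).
from collections import Counter

def create_lookup_tables_sketch(text):
    word_counts = Counter(text)
    # most frequent words receive the smallest ids
    sorted_words = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_words)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return (vocab_to_int, int_to_vocab)
###Output
_____no_output_____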
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols, where the symbol is the key and the token is the value:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
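###Markdown
For reference, one dictionary that satisfies the requirements above is sketched below (the name `token_lookup_sketch` is hypothetical); the exact token strings are a free choice as long as they cannot be mistaken for real words.
###Code
# A minimal sketch of a punctuation-to-token mapping.
def token_lookup_sketch():
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '-': '||Dash||',
        '\n': '||Return||',
    }
###Output
_____no_output_____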
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
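###Markdown
A condensed sketch of the embedding → LSTM → fully-connected pattern used by the completed model further down in this document is shown below; the class name `RNNSketch` is hypothetical, and the extra dropout layer before the fully-connected layer and the GPU move for the hidden state are omitted for brevity.
###Code
# A minimal architectural sketch, not the graded class.
import torch
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.hidden_dim, self.n_layers, self.output_size = hidden_dim, n_layers, output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input.long()), hidden)
        out = self.fc(lstm_out.contiguous().view(-1, self.hidden_dim))
        # keep only the word scores for the last time step of each input sequence
        return out.view(batch_size, -1, self.output_size)[:, -1], hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        return (weight.new_zeros(self.n_layers, batch_size, self.hidden_dim),
                weight.new_zeros(self.n_layers, batch_size, self.hidden_dim))
###Output
_____no_output_____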
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
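###Markdown
A sketch of the forward/backward step is shown below (the name `forward_back_prop_sketch` is hypothetical); it assumes the `train_on_gpu` flag set earlier and detaches the hidden state before each batch, mirroring the completed version further down in this document.
###Code
# A minimal sketch of one training step.
def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow back through the entire history
    hidden = tuple(h.data for h in hidden)
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())
    loss.backward()
    # optionally clip gradients to guard against exploding gradients:
    # nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    return loss.item(), hidden
###Output
_____no_output_____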
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other hyperparameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
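###Markdown
For reference, the completed run that appears further down in this document used the configuration sketched below and reached a loss of roughly 2.7 after 20 epochs; treat these values as a starting point rather than a prescription.
###Code
# Example values only, copied from the completed run further down in this document.
sequence_length = 10
batch_size = 512
num_epochs = 20
learning_rate = 0.001
vocab_size = len(int_to_vocab)
output_size = vocab_size
embedding_dim = 64
hidden_dim = 1024
n_layers = 2
show_every_n_batches = 500
###Output
_____no_output_____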
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
    :param prime_id: The word id to start the first prediction
    :param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (10, 20)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 10 to 20:
jerry: oh, you dont recall?
george: (on an imaginary microphone) uh, no, not at this time.
jerry: well, senator, id just like to know, what you knew and when you knew it.
claire: mr. seinfeld. mr. costanza.
george: are, are you sure this is decaf? wheres the orange indicator?
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_words = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {i : word for i, word in enumerate(sorted_words)}
vocab_to_int = {word : i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols, where the symbol is the key and the token is the value:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.' : '||PERIOD||',
',' : '||COMMA||',
'"' : '||QUOTATION_MARK||',
';' : '||SEMICOLON||',
'?' : '||QUESTION_MARK||',
'!' : '||EXCLAMATION_MARK||',
'(' : '||LEFT_PARENTHESES||',
')' : '||RIGHT_PARENTHESES||',
'-' : '||DASH||',
'\n' : '||RETURN||',
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
import numpy as np
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
features = []
targets = []
for i in range(0, len(words) - sequence_length):
features.append(words[i:i + sequence_length])
targets.append(words[i + sequence_length])
features = torch.tensor(np.array(features))
targets = torch.tensor(np.array(targets))
data = TensorDataset(features, targets)
return DataLoader(data, shuffle=True, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 4, 5, 6, 7, 8],
[ 6, 7, 8, 9, 10],
[30, 31, 32, 33, 34],
[25, 26, 27, 28, 29],
[23, 24, 25, 26, 27],
[15, 16, 17, 18, 19],
[29, 30, 31, 32, 33],
[44, 45, 46, 47, 48],
[16, 17, 18, 19, 20],
[14, 15, 16, 17, 18]], dtype=torch.int32)
torch.Size([10])
tensor([ 9, 11, 35, 30, 28, 20, 34, 49, 21, 19], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, self.output_size)
# self.sig = nn.LogSigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
output, hidden = self.lstm(embeds, hidden)
output = output.contiguous().view(-1, self.hidden_dim)
output = self.dropout(output)
output = self.fc(output)
#output = self.sig(output)
output = output.view(batch_size, -1, self.output_size)
        output = output[:, -1] # keep only the word scores for the last time step in each sequence
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip=5 # gradient clipping
# move data to GPU, if available
if (train_on_gpu):
inp, target = inp.cuda(), target.cuda()
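    # detach the hidden state from its history so gradients don't backpropagate through the whole training set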
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output.squeeze(1), target.long())
loss.backward()
    # Clip gradients to avoid exploding gradients (left disabled here)
#nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other hyperparameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10  # number of words in a sequence (5 was also tried)
# Batch Size
batch_size = 512  # 256 was tried earlier
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 64
# Hidden Dimension
hidden_dim = 1024
# Number of RNN Layers
n_layers = 2  # 3 recommended; the machine translation paper used 4
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.075988801002502
Epoch: 1/20 Loss: 4.497443925857544
Epoch: 1/20 Loss: 4.32460941696167
Epoch: 2/20 Loss: 4.147069308957
Epoch: 2/20 Loss: 4.052237694740295
Epoch: 2/20 Loss: 4.024495181560517
Epoch: 3/20 Loss: 3.9154780662927346
Epoch: 3/20 Loss: 3.8625277924537658
Epoch: 3/20 Loss: 3.855442501068115
Epoch: 4/20 Loss: 3.753882806577451
Epoch: 4/20 Loss: 3.7122063593864443
Epoch: 4/20 Loss: 3.734294280052185
Epoch: 5/20 Loss: 3.615049396563733
Epoch: 5/20 Loss: 3.599308240890503
Epoch: 5/20 Loss: 3.6082874011993407
Epoch: 6/20 Loss: 3.5080363567306025
Epoch: 6/20 Loss: 3.478867627620697
Epoch: 6/20 Loss: 3.5055536317825315
Epoch: 7/20 Loss: 3.397111703443399
Epoch: 7/20 Loss: 3.382487844467163
Epoch: 7/20 Loss: 3.408043267726898
Epoch: 8/20 Loss: 3.305638824511731
Epoch: 8/20 Loss: 3.2897962164878845
Epoch: 8/20 Loss: 3.326266335487366
Epoch: 9/20 Loss: 3.2177110055707536
Epoch: 9/20 Loss: 3.2096406874656678
Epoch: 9/20 Loss: 3.2464095067977907
Epoch: 10/20 Loss: 3.144965614912645
Epoch: 10/20 Loss: 3.142004195690155
Epoch: 10/20 Loss: 3.174967270374298
Epoch: 11/20 Loss: 3.073767374467978
Epoch: 11/20 Loss: 3.0758746700286865
Epoch: 11/20 Loss: 3.109230837345123
Epoch: 12/20 Loss: 3.016540420987214
Epoch: 12/20 Loss: 3.0177016353607176
Epoch: 12/20 Loss: 3.0626930956840517
Epoch: 13/20 Loss: 2.95733917305733
Epoch: 13/20 Loss: 2.9586856560707093
Epoch: 13/20 Loss: 3.0113649258613586
Epoch: 14/20 Loss: 2.913266467919568
Epoch: 14/20 Loss: 2.9096540298461915
Epoch: 14/20 Loss: 2.966789579868317
Epoch: 15/20 Loss: 2.8669572882253846
Epoch: 15/20 Loss: 2.870712673187256
Epoch: 15/20 Loss: 2.9312428784370423
Epoch: 16/20 Loss: 2.834968273851749
Epoch: 16/20 Loss: 2.837061097621918
Epoch: 16/20 Loss: 2.8833885049819945
Epoch: 17/20 Loss: 2.7932560180396724
Epoch: 17/20 Loss: 2.8028618521690367
Epoch: 17/20 Loss: 2.852114058017731
Epoch: 18/20 Loss: 2.7596370524794587
Epoch: 18/20 Loss: 2.771130582332611
Epoch: 18/20 Loss: 2.819936710834503
Epoch: 19/20 Loss: 2.7356518323852046
Epoch: 19/20 Loss: 2.7389076228141787
Epoch: 19/20 Loss: 2.7890029497146607
Epoch: 20/20 Loss: 2.7045901573571878
Epoch: 20/20 Loss: 2.720527849674225
Epoch: 20/20 Loss: 2.7680976309776306
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I expected a smaller sequence_length to converge faster, because intuitively shorter sequences seem simpler to learn than longer ones. However, this was not the case: with all other hyperparameters held constant, changing the sequence length from 5 to 10 lowered the loss from 2.98 to 2.77 over 20 epochs. Perhaps this is the "long term memory" that affects the convergence rate? I was also training with a very small batch size of 5 and changed it to 512, which decreased the loss significantly. I was able to achieve a loss of 2.34 after 10 epochs with 2 layers. I read the machine translation hyperparameters paper (https://arxiv.org/abs/1409.3215), which used 4 layers; however, when I tried 4 layers my loss increased significantly. The same paper also used an embedding size of 1000, but I had much better results with 64. Maybe this makes my model look for fewer patterns (more words are grouped together), and if I were able to train with a larger embedding size the output text would be more coherent. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
    :param prime_id: The word id to start the first prediction
    :param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
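###Markdown
To make the top-k sampling step inside `generate` easier to follow in isolation, the cell below walks through it with a handful of made-up word scores; the scores and the vocabulary size of 8 are purely illustrative.
###Code
# Standalone illustration of the top-k sampling step (made-up scores, not model output).
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 1.5, 0.2, -1.0, 3.1, 0.0, -0.5, 1.0]])  # hypothetical scores for 8 words
p = F.softmax(scores, dim=1).data              # turn raw scores into probabilities
p, top_i = p.topk(5)                           # keep the 5 most likely word ids
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())  # renormalise over the top 5 and sample one id
print('candidate ids:', top_i, 'sampled id:', int(word_i))
###Output
_____no_output_____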
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: i think that would be a great idea for this kind of exposure thing i got to go in there.
jerry: what?
kramer:(shouts) oh, the mohel.
elaine:(quietly) what is that noise?
kramer:(pulling out the timepiece) it's a good game. it's the one that left the top on you?
elaine:(laughs) no, i can't. i gotta get going.
jerry:(to jerry) hey, how about a mute?(takes a bite of the sandwich.)
elaine: so, you want to come upstairs? you gotta go to the bathroom.
jerry: i thought we were talking about this.
jerry:(to kramer) i told you.
george: what?
kramer: you got it?
jerry: i don't know.
elaine:(to the phone) yeah, yeah, yeah, right.
elaine: you know, i don't think this is funny.
jerry:(jokingly) yeah, yeah, right.
jerry: what?
kramer: yeah, i washed.
kramer: hey.(to jerry) so, how are you gonna be the executor of this?
george: what are you doing?
jerry: well, i'm not going to be able to find a hotel room, okay?(jerry nods.) well, i don't think so.
george: what do you mean?
jerry: you know, i don't know...(looks around)
jerry: i don't want you to get me a job.(to jerry) so what do you do in this shirt?
jerry: i don't know.
kramer: well, it's not a purse. it's a little lo...
kramer:(pointing to the kitchen) hey, hey!
elaine: hey.(to elaine) what is this?
jerry:(to kramer) hey!
george:(on tape)
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols, where the symbol is the key and the token is the value:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
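For orientation, a minimal sketch of this windowing (one possible approach; `shuffle=True` is optional and the names are only illustrative):

```
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    # each feature is a window of `sequence_length` word ids,
    # each target is the word id that immediately follows that window
    n_windows = len(words) - sequence_length
    features = [list(words[i:i + sequence_length]) for i in range(n_windows)]
    targets = [words[i + sequence_length] for i in range(n_windows)]
    data = TensorDataset(torch.tensor(features, dtype=torch.long),
                         torch.tensor(targets, dtype=torch.long))
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```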
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
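Purely as a rough skeleton (an embedding feeding an LSTM and a fully-connected layer, applying both hints; the layer sizes and the choice of LSTM over GRU are up to you):

```
import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.output_size = output_size
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input.long()), hidden)
        out = self.fc(lstm_out.contiguous().view(-1, self.hidden_dim))  # hint 1: stack LSTM outputs
        out = out.view(batch_size, -1, self.output_size)[:, -1]         # hint 2: keep only the last word scores
        return out, hidden

    def init_hidden(self, batch_size):
        # zero-initialised (h_0, c_0), created on the same device as the model weights
        weight = next(self.parameters()).data
        return (weight.new_zeros(self.n_layers, batch_size, self.hidden_dim),
                weight.new_zeros(self.n_layers, batch_size, self.hidden_dim))
```

Note that `weight.new_zeros(...)` already places the hidden state on whatever device the model's weights live on, so no explicit `.cuda()` call is strictly needed in this sketch.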
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
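As a hedged sketch of the usual pattern (detach the hidden state from the previous batch so gradients don't flow through the entire history; the gradient-clipping line is optional and the value 5 is only illustrative):

```
import torch
import torch.nn as nn

def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    if torch.cuda.is_available():
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state from the previous batch
    hidden = tuple(h.data for h in hidden)
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # optional safeguard against exploding gradients
    optimizer.step()
    return loss.item(), hidden
```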
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
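For orientation only, one plausible starting configuration (these values are not prescriptive; they simply mirror the kind of settings used elsewhere in this document and will likely need tuning for your hardware and loss target):

```
sequence_length = 10          # words per input sequence
batch_size = 100
num_epochs = 15
learning_rate = 0.001
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 300
hidden_dim = 512
n_layers = 2
show_every_n_batches = 500
```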
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
punc_tokens = {'.':'<PERIOD>',
',':'<COMMA>',
'"':'<QUOTATION_MARK>',
';':'<SEMICOLON>',
'!':'<EXCLAMATION_MARK>',
'?':'<QUESTION_MARK>',
'(':'<LEFT_PAREN>',
')':'<RIGHT_PAREN>',
'-':'<HYPHEN>',
'\n':'<HYPHENS>'}
return punc_tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
import torch
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
#num_batches = int(len(words)/sequence_length)
num_batches = len(words)-sequence_length
feature_tensors = np.zeros(shape=(num_batches,sequence_length)).astype(int)
target_tensors = np.zeros(shape=(num_batches)).astype(int)
# This clips off any end values that don't fit into a batch
print(batch_size)
for batch_idx in range(num_batches):
for word_idx in range (sequence_length):
#print("batchidx:", batch_idx, " wordidx:",word_idx,"seqL:",sequence_length)
feature_tensors[batch_idx][word_idx] = words[(batch_idx)+word_idx]
#print(feature_tensors)
target_tensors[batch_idx] = words[batch_idx + sequence_length]
#print(target_tensors)
#print("feature size: ", feature_tensors.shape)
feature_tensors = torch.Tensor(feature_tensors).int()
target_tensors = torch.Tensor(target_tensors).int()
#print(feature_tensors)
#print(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_data([1,2,3,4,5,6,7,8,9,10], 4, 1)
###Output
1
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
print(str(test_text))
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
range(0, 50)
10
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]], dtype=torch.int32)
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
import torch
import torch.nn.functional as F
import torch.optim as optim
from torchsummary import summary
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
#print(n_layers)
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True, num_layers = n_layers, dropout = dropout)
self.fc_out = nn.Linear(hidden_dim, output_size)
#self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
embeddings = self.embed(nn_input.long())
#embeddings = self.dropout(embeddings)
lstm_out, (hidden_state, cell_state) = self.lstm(embeddings, hidden)
#lstm_out = self.dropout(lstm_out)
tag_space = self.fc_out(lstm_out.contiguous().view(-1, self.hidden_dim)) # stack LSTM outputs, then apply the fully-connected layer (hint 1)
#tag_space = self.dropout(tag_space)
out = F.log_softmax(tag_space, dim=1)
output = out.view(self.batch_size, -1, self.output_size)
# get last batch
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, (hidden_state, cell_state)
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
self.batch_size = batch_size
if (torch.cuda.is_available()):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
## To reviewer, why did the below code not work? ##
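## (Likely answer: `hidden_state = cell_state.cuda()` overwrites hidden_state and leaves cell_state on the CPU,
##  so the (h_0, c_0) pair handed to the LSTM ends up with tensors on different devices.)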
# hidden_state = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
# cell_state = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
# if (torch.cuda.is_available()):
# hidden_state = hidden_state.cuda()
# hidden_state = cell_state.cuda()
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
#summary(RNN, (50,3))
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if torch.cuda.is_available():
rnn = rnn.cuda()
target = target.cuda()
inp = inp.cuda()
for element in hidden:
element.cuda()
hidden = hidden[0].data, hidden[1].data
# perform backpropagation and optimization
optimizer.zero_grad()
probabilities, new_hidden = rnn(inp, hidden)
loss = criterion(probabilities.squeeze(), target.long())
loss.backward()
optimizer.step()
loss = loss.item()
#print("loss is: ", loss)
# return the loss over a batch and the hidden state produced by our model
return loss, new_hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 5.363724740028381
Epoch: 1/15 Loss: 4.888778426647186
Epoch: 1/15 Loss: 4.602891879081726
Epoch: 1/15 Loss: 4.440466387271881
Epoch: 1/15 Loss: 4.315282370567322
Epoch: 1/15 Loss: 4.439320840358734
Epoch: 1/15 Loss: 4.374251626968384
Epoch: 1/15 Loss: 4.425193030357361
Epoch: 1/15 Loss: 4.306451050758362
Epoch: 1/15 Loss: 4.270074176311493
Epoch: 1/15 Loss: 4.1075156888961795
Epoch: 1/15 Loss: 4.242189838409423
Epoch: 1/15 Loss: 4.165516932964325
Epoch: 1/15 Loss: 4.256982106685639
Epoch: 1/15 Loss: 4.319668600082397
Epoch: 1/15 Loss: 4.281853661537171
Epoch: 1/15 Loss: 4.277990152359009
Epoch: 2/15 Loss: 4.095853758427789
Epoch: 2/15 Loss: 3.9387496814727783
Epoch: 2/15 Loss: 3.855009229660034
Epoch: 2/15 Loss: 3.779229765892029
Epoch: 2/15 Loss: 3.708533215999603
Epoch: 2/15 Loss: 3.829554548740387
Epoch: 2/15 Loss: 3.8201945214271547
Epoch: 2/15 Loss: 3.8864864454269408
Epoch: 2/15 Loss: 3.8170689125061035
Epoch: 2/15 Loss: 3.7649069638252257
Epoch: 2/15 Loss: 3.6645705289840698
Epoch: 2/15 Loss: 3.768539720535278
Epoch: 2/15 Loss: 3.737532067298889
Epoch: 2/15 Loss: 3.8175901894569395
Epoch: 2/15 Loss: 3.853895040512085
Epoch: 2/15 Loss: 3.834243857860565
Epoch: 2/15 Loss: 3.856651794910431
Epoch: 3/15 Loss: 3.7600109248415006
Epoch: 3/15 Loss: 3.6702586903572083
Epoch: 3/15 Loss: 3.6068032207489016
Epoch: 3/15 Loss: 3.5446518363952637
Epoch: 3/15 Loss: 3.5035154733657836
Epoch: 3/15 Loss: 3.5842297101020812
Epoch: 3/15 Loss: 3.5798610644340516
Epoch: 3/15 Loss: 3.6674898381233216
Epoch: 3/15 Loss: 3.5808813018798826
Epoch: 3/15 Loss: 3.5284850821495057
Epoch: 3/15 Loss: 3.453301142692566
Epoch: 3/15 Loss: 3.540464120388031
Epoch: 3/15 Loss: 3.528485816001892
Epoch: 3/15 Loss: 3.591624412059784
Epoch: 3/15 Loss: 3.6339498777389525
Epoch: 3/15 Loss: 3.6089256463050843
Epoch: 3/15 Loss: 3.614502366065979
Epoch: 4/15 Loss: 3.5568797748847323
Epoch: 4/15 Loss: 3.4951449093818665
Epoch: 4/15 Loss: 3.432937166213989
Epoch: 4/15 Loss: 3.3882588233947755
Epoch: 4/15 Loss: 3.35321679019928
Epoch: 4/15 Loss: 3.3928859539031984
Epoch: 4/15 Loss: 3.4175438389778137
Epoch: 4/15 Loss: 3.509397229671478
Epoch: 4/15 Loss: 3.4073427720069884
Epoch: 4/15 Loss: 3.367028179168701
Epoch: 4/15 Loss: 3.3141687455177307
Epoch: 4/15 Loss: 3.3738673095703127
Epoch: 4/15 Loss: 3.3895227723121644
Epoch: 4/15 Loss: 3.4287908387184145
Epoch: 4/15 Loss: 3.464225102901459
Epoch: 4/15 Loss: 3.4413810710906985
Epoch: 4/15 Loss: 3.460006271839142
Epoch: 5/15 Loss: 3.4184037168691264
Epoch: 5/15 Loss: 3.3845978441238405
Epoch: 5/15 Loss: 3.3043018698692324
Epoch: 5/15 Loss: 3.2577729382514953
Epoch: 5/15 Loss: 3.247817492723465
Epoch: 5/15 Loss: 3.2630201330184936
Epoch: 5/15 Loss: 3.288782137393951
Epoch: 5/15 Loss: 3.377908104419708
Epoch: 5/15 Loss: 3.2936730494499207
Epoch: 5/15 Loss: 3.2489995722770693
Epoch: 5/15 Loss: 3.21657404756546
Epoch: 5/15 Loss: 3.265457293510437
Epoch: 5/15 Loss: 3.274803861618042
Epoch: 5/15 Loss: 3.29999480009079
Epoch: 5/15 Loss: 3.338403857707977
Epoch: 5/15 Loss: 3.3065478682518004
Epoch: 5/15 Loss: 3.3153113703727723
Epoch: 6/15 Loss: 3.2984728061156217
Epoch: 6/15 Loss: 3.280026596069336
Epoch: 6/15 Loss: 3.206384714126587
Epoch: 6/15 Loss: 3.1578216185569765
Epoch: 6/15 Loss: 3.1524285202026365
Epoch: 6/15 Loss: 3.1646585154533384
Epoch: 6/15 Loss: 3.209417939186096
Epoch: 6/15 Loss: 3.2608956065177916
Epoch: 6/15 Loss: 3.1965203328132628
Epoch: 6/15 Loss: 3.161787514209747
Epoch: 6/15 Loss: 3.1257205181121828
Epoch: 6/15 Loss: 3.1545133209228515
Epoch: 6/15 Loss: 3.179970184803009
Epoch: 6/15 Loss: 3.199390508174896
Epoch: 6/15 Loss: 3.2460207538604737
Epoch: 6/15 Loss: 3.223746827125549
Epoch: 6/15 Loss: 3.2162142028808596
Epoch: 7/15 Loss: 3.210596465650262
Epoch: 7/15 Loss: 3.191326835155487
Epoch: 7/15 Loss: 3.1359806246757507
Epoch: 7/15 Loss: 3.0845880403518677
Epoch: 7/15 Loss: 3.082687391757965
Epoch: 7/15 Loss: 3.073760934829712
Epoch: 7/15 Loss: 3.124136496067047
Epoch: 7/15 Loss: 3.176078475475311
Epoch: 7/15 Loss: 3.1255447597503663
Epoch: 7/15 Loss: 3.0919962391853333
Epoch: 7/15 Loss: 3.0544187150001525
Epoch: 7/15 Loss: 3.072029559135437
Epoch: 7/15 Loss: 3.100132073402405
Epoch: 7/15 Loss: 3.127228157043457
Epoch: 7/15 Loss: 3.170003378868103
Epoch: 7/15 Loss: 3.1392071042060854
Epoch: 7/15 Loss: 3.137432764530182
Epoch: 8/15 Loss: 3.1422166401089595
Epoch: 8/15 Loss: 3.125530011177063
Epoch: 8/15 Loss: 3.064560781955719
Epoch: 8/15 Loss: 3.028349506378174
Epoch: 8/15 Loss: 3.02254749751091
Epoch: 8/15 Loss: 3.00223459815979
Epoch: 8/15 Loss: 3.0599766712188723
Epoch: 8/15 Loss: 3.1133778076171876
Epoch: 8/15 Loss: 3.063219702243805
Epoch: 8/15 Loss: 3.030902289867401
Epoch: 8/15 Loss: 3.0022066388130186
Epoch: 8/15 Loss: 3.0039333429336548
Epoch: 8/15 Loss: 3.0490253419876097
Epoch: 8/15 Loss: 3.0613285880088807
Epoch: 8/15 Loss: 3.1253976950645446
Epoch: 8/15 Loss: 3.082568682193756
Epoch: 8/15 Loss: 3.0767971205711366
Epoch: 9/15 Loss: 3.0788947708831937
Epoch: 9/15 Loss: 3.0670766415596007
Epoch: 9/15 Loss: 3.0165445742607115
Epoch: 9/15 Loss: 2.97663409948349
Epoch: 9/15 Loss: 2.9736308569908143
Epoch: 9/15 Loss: 2.9499198145866394
Epoch: 9/15 Loss: 3.0003385348320006
Epoch: 9/15 Loss: 3.051873701095581
Epoch: 9/15 Loss: 3.0140964221954345
Epoch: 9/15 Loss: 2.9725867652893068
Epoch: 9/15 Loss: 2.953563188076019
Epoch: 9/15 Loss: 2.9546047863960267
Epoch: 9/15 Loss: 2.9902475590705873
Epoch: 9/15 Loss: 3.003370816707611
Epoch: 9/15 Loss: 3.0561471509933473
Epoch: 9/15 Loss: 3.0176299023628235
Epoch: 9/15 Loss: 3.021928065776825
Epoch: 10/15 Loss: 3.0281915571479923
Epoch: 10/15 Loss: 3.012572217941284
Epoch: 10/15 Loss: 2.9753078241348265
Epoch: 10/15 Loss: 2.931376220703125
Epoch: 10/15 Loss: 2.9271128175258636
Epoch: 10/15 Loss: 2.894703689098358
Epoch: 10/15 Loss: 2.9489109396934508
Epoch: 10/15 Loss: 3.002216926574707
Epoch: 10/15 Loss: 2.9645584416389466
Epoch: 10/15 Loss: 2.9409907698631286
Epoch: 10/15 Loss: 2.9279688477516173
Epoch: 10/15 Loss: 2.921959735393524
Epoch: 10/15 Loss: 2.9449917345046996
Epoch: 10/15 Loss: 2.954298352956772
Epoch: 10/15 Loss: 3.0018203473091125
Epoch: 10/15 Loss: 2.964821640253067
Epoch: 10/15 Loss: 2.969308503627777
Epoch: 11/15 Loss: 2.9766264803117055
Epoch: 11/15 Loss: 2.976940345287323
Epoch: 11/15 Loss: 2.923772484779358
Epoch: 11/15 Loss: 2.8901746935844423
Epoch: 11/15 Loss: 2.8889050569534303
Epoch: 11/15 Loss: 2.853283078670502
Epoch: 11/15 Loss: 2.904878322601318
Epoch: 11/15 Loss: 2.968543590545654
Epoch: 11/15 Loss: 2.9214342670440674
Epoch: 11/15 Loss: 2.8863476095199583
Epoch: 11/15 Loss: 2.869296367645264
Epoch: 11/15 Loss: 2.8579204139709473
Epoch: 11/15 Loss: 2.90662128162384
Epoch: 11/15 Loss: 2.9107823357582094
Epoch: 11/15 Loss: 2.9528587265014647
Epoch: 11/15 Loss: 2.9245549449920656
Epoch: 11/15 Loss: 2.93568341588974
Epoch: 12/15 Loss: 2.9402612914997643
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I chose my sequence length based on the length of sentences in the data (which was 5-10) and paired it with a little of my own bias (longer was a little more interesting to me). My batch size was simply chosen to be as large as I could make it with the cuda memory I had available. I used a learning rate consistent with what I've used in the past, and in the end had little reason to change it. My vocab and output size were chosen to be the number of unique words in the data, simply because I didn't want to omit words from the model's dictionary. My embedding_dim, hidden_dim, and n_layers were all chosen by a process of guess-n-check to be honest. I tried a few thousand for embedding and hidden, and then tried a few hundred. I found that having the _dim_ parameters too high hurt performance and trainability. As for n_layers, I initially tried 4 but found better results with 1, and tweaked it up to 2 layers after tuning my other parameters. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
##### Added next 3 lines to fix numpy/device bug ####
rnn.cpu()
current_seq = current_seq.cpu()
hidden = hidden[0].cpu(), hidden[1].cpu()
#print(" curs:", current_seq.device, "hidden[0]", hidden[0].device, "hidden[1]", hidden[1].device)
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: finally finally passed into the town, then they wouldn't agree.
estelle: so you called yourself a jerk.
jerry: i can't believe we had enough to do it.
george: i think we've seen the act before we finish buttons.
kramer: oh no no no no! i'm gonna borrow gum.
elaine: literally yourself beep. i don't want the gum anymore, so, the bubble boy was gone.
george: woo...
stu: heh.
secretary: ladies and gentlemen, gimme that rye.
jerry: so how's that boyfriend?
jerry: the contest was a sentence.
jerry: i thought we'd be taller.
newman: aww, heh heh heh heh.
secretary: hi everybody fold up here, mr. kramer?
jerry:(shrugging through the closed bag) oh, hi, mark.
stu: bye mrs. vandelay, elaine.
jerry: nbc certainly parked to your homes.
stu: oh.
newman:(shocked) wha- wha- what is that noise?
george: because it didn't end me out to rent.
jerry: ma, dad!
george: you think she's lookin'?
frank: yes, yes. i think that's a definite issue, but you cannot prove it.
morty: congratulations!
newman: woo!
elaine: hey! hold it, honey!
sales woman: oh no. i can't believe it is happening.
frank: mr. steinbrenner?
jerry: no one's bringing a letter trip to town.
jerry: oh, no problem.
estelle: so you broke up with that?
jerry: well, i'm sure it's burning policy.
morty: congratulations.
waitress: hi mom.
morty: hello stu.
secretary: hi everybody cop.
secretary: ladies and gentlemen, i can't afford to be successful, but i'm not going to get out together sometime.
george: ma.
both: ladies and
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
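As a quick illustration of what these two dictionaries look like (a minimal sketch on a toy word list, separate from the implementation you will write below; ids here are assigned by descending word frequency, which is a common but not required choice):

```python
from collections import Counter

toy_text = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat']
counts = Counter(toy_text)
sorted_vocab = sorted(counts, key=counts.get, reverse=True)  # most frequent words first

vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}

print(vocab_to_int)     # e.g. {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(int_to_vocab[0])  # 'the'
```

The key property is that the two dictionaries are exact inverses of each other.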
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int2vocab = { i : w for i, w in enumerate(sorted_vocab)}
vocab2int = { w : i for i, w in int2vocab.items()}
# return tuple
return (vocab2int, int2vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
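For instance (a small sketch of the general idea, not necessarily the exact implementation used by `preprocess_and_save_data`), the returned dictionary can be applied by surrounding each symbol with spaces before splitting on whitespace:

```python
sample = 'hello! are you there?'
token_dict = {'!': '||Exclamation_Mark||', '?': '||Question_Mark||'}  # subset for illustration

for symbol, token in token_dict.items():
    sample = sample.replace(symbol, ' {} '.format(token))

print(sample.split())
# ['hello', '||Exclamation_Mark||', 'are', 'you', 'there', '||Question_Mark||']
```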
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = { '.': '<PERIOD>',
',': '<COMMA>',
'"': '<QUOTATION_MARK>',
';': '<SEMI_COLON>',
'!': '<EXCLAMATION_MARK>',
'?': '<QUESTION_MARK>',
'(': '<LEFT_PARENTHESIS>',
')': '<RIGHT_PARENTHESIS>',
'-': '<DASH>',
'\n': '<NEW_LINE>'}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print('len(int_text):', len(int_text))
print('type(int_text):', type(int_text))
print('len(vocab_to_int):', len(vocab_to_int))
print('len(int_to_vocab):', len(int_to_vocab))
print('len(token_dict):', len(token_dict))
txt = []
for i in int_text[:10]:
txt.append(int_to_vocab[i])
txt
###Output
len(int_text): 892110
type(int_text): <class 'list'>
len(vocab_to_int): 21388
len(int_to_vocab): 21388
len(token_dict): 10
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
train_on_gpu
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
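To make the sliding-window pattern above concrete, here is a minimal sketch (using toy names so it does not clash with the real `words` and `sequence_length` defined later) of the feature/target pairs produced for the example input:

```python
toy_words = [1, 2, 3, 4, 5, 6, 7]
seq_len = 4

features = [toy_words[i:i + seq_len] for i in range(len(toy_words) - seq_len)]
targets = [toy_words[i + seq_len] for i in range(len(toy_words) - seq_len)]

print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```

Wrapping these two lists in tensors and handing them to `TensorDataset`/`DataLoader` is then exactly the pattern shown in the snippet above.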
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
batch_size_total = batch_size * sequence_length
n_batches = len(words)//batch_size_total
# keep only enough words to make full batches
words = np.array(words[:n_batches * batch_size_total])
features, target = [], []
for i in range(0, len(words)-sequence_length):
features.append(words[i:i+sequence_length])
target.append(words[i+sequence_length])
train_data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(target)))
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
# return a dataloader
return train_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 30, 31, 32, 33, 34],
[ 36, 37, 38, 39, 40],
[ 13, 14, 15, 16, 17],
[ 37, 38, 39, 40, 41],
[ 44, 45, 46, 47, 48],
[ 6, 7, 8, 9, 10],
[ 25, 26, 27, 28, 29],
[ 0, 1, 2, 3, 4],
[ 7, 8, 9, 10, 11],
[ 28, 29, 30, 31, 32]])
torch.Size([10])
tensor([ 35, 41, 18, 42, 49, 11, 30, 5, 12, 33])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
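As a shape sanity check for the hints above (a standalone sketch with made-up small dimensions, independent of the class you will implement below):

```python
import torch
import torch.nn as nn

batch_size, seq_length = 10, 5
vocab_size, embedding_dim, hidden_dim, output_size = 20, 8, 16, 20

embedding = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=2, batch_first=True)
fc = nn.Linear(hidden_dim, output_size)

x = torch.randint(0, vocab_size, (batch_size, seq_length))  # a batch of token ids
lstm_out, hidden = lstm(embedding(x))                       # (10, 5, 16)
stacked = lstm_out.contiguous().view(-1, hidden_dim)        # (50, 16)
scores = fc(stacked).view(batch_size, -1, output_size)      # (10, 5, 20)
last_word_scores = scores[:, -1]                            # (10, 20): one score vector per sequence
print(last_word_scores.shape)                               # torch.Size([10, 20])
```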
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embedding_output = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embedding_output, hidden)
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# print(self.parameters())
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
rnn.cuda()
# perform backpropagation and optimization
# reinitialize hidden variable to prevent backpropagation
h = tuple([each.data for each in hidden])
#zero accumulated gradients
rnn.zero_grad()
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
out, h = rnn(inp, h)
# calculate loss and perform backprop
loss = criterion(out, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs/LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be printed after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
with active_session():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
#print(rnn)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.843531439900398
Epoch: 1/10 Loss: 4.3292163555622105
Epoch: 1/10 Loss: 4.185644910812378
Epoch: 2/10 Loss: 4.001688238896805
Epoch: 2/10 Loss: 3.9097478613853456
Epoch: 3/10 Loss: 3.7823917873004835
Epoch: 3/10 Loss: 3.7440224170684813
Epoch: 3/10 Loss: 3.747664158701897
Epoch: 4/10 Loss: 3.656581620744613
Epoch: 4/10 Loss: 3.632077686071396
Epoch: 5/10 Loss: 3.5665080450643916
Epoch: 5/10 Loss: 3.54936509001255
Epoch: 5/10 Loss: 3.569948466539383
Epoch: 6/10 Loss: 3.4989693093114544
Epoch: 6/10 Loss: 3.482204401016235
Epoch: 6/10 Loss: 3.510458270072937
Epoch: 7/10 Loss: 3.4366343465355773
Epoch: 7/10 Loss: 3.434198954820633
Epoch: 7/10 Loss: 3.4671989839076995
Epoch: 8/10 Loss: 3.3946180754555524
Epoch: 8/10 Loss: 3.3873555065393446
Epoch: 8/10 Loss: 3.4223250226974486
Epoch: 9/10 Loss: 3.362420918692524
Epoch: 9/10 Loss: 3.3477510668039323
Epoch: 9/10 Loss: 3.3936511422395705
Epoch: 10/10 Loss: 3.3275063229638526
Epoch: 10/10 Loss: 3.3241239243745806
Epoch: 10/10 Loss: 3.3594459886550903
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**To determine the values to use for the hyperparameters, I browsed through several references linked in the RNN modules and the various RNN and LSTM exercises. In particular, the material covered in the Hyperparameters module (Lesson 4) and the following references were most helpful: [Deep Learning book - chapter 11.4](http://www.deeplearningbook.org/contents/guidelines.html): Selecting Hyperparameters by Ian Goodfellow, Yoshua Bengio, Aaron Courville, and [An Empirical Exploration of Recurrent Network Architectures](http://proceedings.mlr.press/v37/jozefowicz15.pdf) by Rafal Jozefowicz, Wojciech Zaremba, Ilya Sutskever, which I found very fascinating. While selecting the hyperparameters to focus on, I kept in mind the following points from Lesson 4:* the two most important parameters that control the model are n_hidden and n_layers.* Andrej Karpathy's recommendation to use n_layers of either 2 or 3 and adjust n_hidden based on how much data you have; we have about 800K words in all, with a vocabulary of 21K.I trained the model using several combinations of hyperparameters before settling on the values above. The values I tried and my observations are as follows:* number of layers: 1, 2. Although not recommended, I noticed that the model converged the quickest, to a loss of 3.22, using a single LSTM layer. I also tested this model, and there was no significant difference between the script it generated and the one from the 2-layer model. In fact, the model size was also smaller: 41 MB vs. 61 MB for the 2-layer model used to generate the script below.* embedding dimensions: 200, 300, 400. Increasing this parameter increased the training time, with no significant change in loss.* hidden dimensions: 256, 512. Increasing the hidden dimension increased the training time, with no significant change in loss.* sequence lengths: 7, 10, 100. Since most lines in the script consist of a small number of words, I used a value of 10 for the submission. A value of 100 significantly increased the training time; even after training for a couple of hours, the loss did not drop below 3.7.* learning rate: 0.01, 0.003, 0.001. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
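The kind of comparison described in the answer above could be organized roughly as follows. This is a minimal sketch, not part of the submitted run: it assumes the `RNN`, `batch_data` and `train_rnn` definitions from earlier in the notebook, reuses the globals (`int_text`, `batch_size`, `learning_rate`, `vocab_size`, `output_size`, `show_every_n_batches`, `train_on_gpu`), and the candidate values are purely illustrative; the checkpoint cell below does not depend on it.

```python
# Illustrative settings to compare; in practice each candidate was trained for longer.
candidates = [
    {'n_layers': 1, 'hidden_dim': 256, 'embedding_dim': 200, 'sequence_length': 10},
    {'n_layers': 2, 'hidden_dim': 256, 'embedding_dim': 400, 'sequence_length': 10},
]

for cfg in candidates:
    # train_rnn iterates over the global `train_loader`, so rebuild it for this config
    train_loader = batch_data(int_text, cfg['sequence_length'], batch_size)

    model = RNN(vocab_size, output_size, cfg['embedding_dim'],
                cfg['hidden_dim'], cfg['n_layers'], dropout=0.5)
    if train_on_gpu:
        model.cuda()

    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()

    print('Config:', cfg)
    # one short epoch per candidate, just to compare how quickly the loss falls
    train_rnn(model, batch_size, optimizer, criterion, 1, show_every_n_batches)
```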
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
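As a standalone illustration of the top-k sampling step (a minimal sketch on a made-up score vector, not tied to the trained model): keep only the k highest-scoring words and sample among them in proportion to their probabilities.

```python
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.5, -1.0, 0.0]])  # fake network output for a 5-word vocabulary
p = F.softmax(scores, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)                 # probabilities and ids of the 3 most likely words
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

word_i = np.random.choice(top_i, p=p / p.sum())  # sample one id, weighted by probability
print(word_i)                                    # usually 0, sometimes 2, occasionally 1
```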
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # the generated word becomes the next "current sequence" and the cycle can continue
        # (move the tensor back to the cpu first so np.roll can operate on it)
        current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 500 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:40: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
!wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/project-tv-script-generation/helper.py
!mkdir data
!wget -P data https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/project-tv-script-generation/data/Seinfeld_Scripts.txt
!wget https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/project-tv-script-generation/problem_unittests.py
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = 'data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
#Maps words to integers
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {}
int_to_vocab = {}
for c, value in enumerate(vocab, 1):
vocab_to_int[value] = c
int_to_vocab[c] = value
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dic = {
".": "||Period||",
",": "||Comma||",
'"': "||Quotation_Mark||",
";": "||Semicolon||",
"!": "||Exclamation_Mark||",
"?": "||Question_Mark||",
"(": "||Left_Parentheses||",
")": "||Right_Parentheses||",
"-": "||Dash||",
"\n": "||Return||",
}
return token_dic
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_targets = len(words) - sequence_length
feature, target = [], []
for i in range(n_targets):
x = words[i : i+sequence_length] # get some words from the given list
y = words[i+sequence_length] # get the next word to be the target
feature.append(x)
target.append(y)
feature_tensor, target_tensor = torch.from_numpy(np.array(feature)), torch.from_numpy(np.array(target))
data = TensorDataset(feature_tensor, target_tensor)
dataloader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[32, 33, 34, 35, 36],
[11, 12, 13, 14, 15],
[14, 15, 16, 17, 18],
[ 8, 9, 10, 11, 12],
[12, 13, 14, 15, 16],
[ 4, 5, 6, 7, 8],
[23, 24, 25, 26, 27],
[17, 18, 19, 20, 21],
[35, 36, 37, 38, 39],
[27, 28, 29, 30, 31]])
torch.Size([10])
tensor([37, 16, 19, 13, 17, 9, 28, 22, 40, 32])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
#self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
#embeddings and lstm_out
x = nn_input.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
#stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
sig_out = self.fc(lstm_out)
# sigmoid function
#sig_out = self.sig(out)
# reshape into (batch_size, seq_length, output_size)
sig_out = sig_out.view(batch_size, -1, self.output_size)
# get last batch
sig_out = sig_out[:, -1]
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be printed after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 250
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 350
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.326384628295899
Epoch: 1/10 Loss: 4.710179415225983
Epoch: 1/10 Loss: 4.48501496553421
Epoch: 1/10 Loss: 4.375867362499237
Epoch: 1/10 Loss: 4.282036849498748
Epoch: 1/10 Loss: 4.2112381291389465
Epoch: 1/10 Loss: 4.1846558079719545
Epoch: 2/10 Loss: 4.036768052359702
Epoch: 2/10 Loss: 3.993407000541687
Epoch: 2/10 Loss: 3.9557144689559935
Epoch: 2/10 Loss: 3.948258867740631
Epoch: 2/10 Loss: 3.9429902210235594
Epoch: 2/10 Loss: 3.915199726104736
Epoch: 2/10 Loss: 3.9019778184890748
Epoch: 3/10 Loss: 3.7550781675627536
Epoch: 3/10 Loss: 3.743118405342102
Epoch: 3/10 Loss: 3.772823224544525
Epoch: 3/10 Loss: 3.748188156604767
Epoch: 3/10 Loss: 3.7356738142967223
Epoch: 3/10 Loss: 3.731651346206665
Epoch: 3/10 Loss: 3.7295025300979616
Epoch: 4/10 Loss: 3.6025442728694057
Epoch: 4/10 Loss: 3.601614813327789
Epoch: 4/10 Loss: 3.593716688632965
Epoch: 4/10 Loss: 3.5937984857559204
Epoch: 4/10 Loss: 3.5985618085861204
Epoch: 4/10 Loss: 3.611578746318817
Epoch: 4/10 Loss: 3.59236829662323
Epoch: 5/10 Loss: 3.4775612299710934
Epoch: 5/10 Loss: 3.4557712960243223
Epoch: 5/10 Loss: 3.4928574204444884
Epoch: 5/10 Loss: 3.481609657287598
Epoch: 5/10 Loss: 3.4705481195449828
Epoch: 5/10 Loss: 3.4972275295257567
Epoch: 5/10 Loss: 3.5121942739486696
Epoch: 6/10 Loss: 3.3825889275947087
Epoch: 6/10 Loss: 3.3621983828544617
Epoch: 6/10 Loss: 3.376428952693939
Epoch: 6/10 Loss: 3.3920288395881655
Epoch: 6/10 Loss: 3.397782608509064
Epoch: 6/10 Loss: 3.40949845457077
Epoch: 6/10 Loss: 3.4246185708045958
Epoch: 7/10 Loss: 3.2958977587626013
Epoch: 7/10 Loss: 3.288792898654938
Epoch: 7/10 Loss: 3.313106318473816
Epoch: 7/10 Loss: 3.3204531650543214
Epoch: 7/10 Loss: 3.3245518345832825
Epoch: 7/10 Loss: 3.340586449146271
Epoch: 7/10 Loss: 3.343881350040436
Epoch: 8/10 Loss: 3.2102109496022613
Epoch: 8/10 Loss: 3.2323768420219423
Epoch: 8/10 Loss: 3.2285256972312926
Epoch: 8/10 Loss: 3.2577172226905824
Epoch: 8/10 Loss: 3.2605974197387697
Epoch: 8/10 Loss: 3.286794167518616
Epoch: 8/10 Loss: 3.2879000945091246
Epoch: 9/10 Loss: 3.165316897378841
Epoch: 9/10 Loss: 3.165338409900665
Epoch: 9/10 Loss: 3.2026779408454895
Epoch: 9/10 Loss: 3.199467888355255
Epoch: 9/10 Loss: 3.206555487155914
Epoch: 9/10 Loss: 3.218054090499878
Epoch: 9/10 Loss: 3.2293826608657836
Epoch: 10/10 Loss: 3.1125497591327616
Epoch: 10/10 Loss: 3.124452010154724
Epoch: 10/10 Loss: 3.1387273473739623
Epoch: 10/10 Loss: 3.1481817121505737
Epoch: 10/10 Loss: 3.172391725540161
Epoch: 10/10 Loss: 3.178947241306305
Epoch: 10/10 Loss: 3.183301763534546
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** After several tests, I found a combination of hyperparameters that gave me good results: for example, n_layers equal to 2 and hidden_dim equal to 350. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # the generated word becomes the next "current sequence" and the cycle can continue
        # (move the tensor back to the cpu first so np.roll can operate on it)
        if train_on_gpu:
            current_seq = current_seq.cpu()
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: freeze in the air.
elaine: what about you?
george: what do you mean.
elaine: i can't believe this.(jerry opens his head 'no')
kramer:(to jerry) oh, hi, hi.
jerry: hello.
jerry: hi, i know that this is a little rocky. you know, it's not fair, but i was wondering...
jerry:(to kramer) what do you say?
george:(still laughing) you know what you do? you don't want me to get it fixed.
george: what do you mean......... entenmann's.(to george)
george: hey, hey, hey, what are you doing here?
elaine: i think it's a great solution.
george:(to jerry) oh, my god...
jerry: hey, i got it.
kramer:(to the man) what is this?
jerry: oh, i think you can.
jerry:(to george) you know, you know, they have no idea how to do this?
jerry: no.
george:(to the phone) hello, jerry.
jerry: hey.
kramer: hey! jerry!
jerry:(to kramer) you see, this is a great idea...
jerry: i don't know.
elaine:(pointing at the door) hey, what's the problem? you know, i think it's not that bad.
kramer: oh! i forgot to be a little nervous now, you don't know what i did. you know you got any shredded coconut?
jerry: no no, it's too tight.
george: what do you mean?
jerry: well, i just got a little steam on the street.
jerry: you don't know what you want.
jerry: oh, yeah.
jerry: yeah, i don't know.
george: oh
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
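If you would rather do the final packaging in code, a minimal sketch along these lines works (the archive name `submission.zip` and the HTML file name are just examples; adjust them to match your exported files):
```
import zipfile

# bundle the submission files listed above into a single archive
files = ['dlnd_tv_script_generation.ipynb',
         'dlnd_tv_script_generation.html',   # the "Download as.." -> "html" export
         'helper.py',
         'problem_unittests.py']
with zipfile.ZipFile('submission.zip', 'w') as zf:
    for name in files:
        zf.write(name)
```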
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
token_dict = {
'.':' ||PERIOD|| ',
',':' ||COMMA|| ',
'"':' ||QUOTATION_MARK|| ',
';':' ||SEMICOLON|| ',
'!':' ||EXCLAMATION_MARK|| ',
'?':' ||QUESTION_MARK|| ',
'(':' ||LEFT_PAREN|| ',
')':' ||RIGHT_PAREN|| ',
'-': ' ||DASH|| ',
'\n': ' ||RETURN||' ,
':': ' ||COLON|| '
}
print(token_dict)
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.':'||PERIOD||',
',':'||COMMA||',
'"':'||QUOTATION_MARK||',
';':'||SEMICOLON||',
'!':'||EXCLAMATION_MARK||',
'?':'||QUESTION_MARK||',
'(':'||LEFT_PAREN||',
')':'||RIGHT_PAREN||',
'-': '||DASH||',
'\n': '||RETURN||' ,
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = int(len(words) / (batch_size))
## TODO: Keep only enough characters to make full batches
words = words[:n_batches * batch_size]
feature_tensors = []
target_tensors = []
for n in range(len(words) - sequence_length):
target_tensors.append(words[n+sequence_length])
feature_tensors.append(words[n:n+sequence_length])
feature_tensors = torch.from_numpy(np.array(feature_tensors))
target_tensors = torch.from_numpy(np.array(target_tensors))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
# return a dataloader
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.dropout, batch_first=True)
#self.dropout = nn.Dropout(self.dropout)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
batch_size = nn_input.size(0)
# embeddings and lstm_out
emb = self.embedding(nn_input)
r_output, hidden = self.lstm(emb, hidden)
#lstm_output = self.dropout(r_output)
lstm_output = r_output.contiguous().view(-1, self.hidden_dim)
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if torch.cuda.is_available():
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
#forward
h = tuple([each.data for each in hidden])
rnn.zero_grad()
out, h = rnn(inp, h)
# perform backpropagation and optimization
loss = criterion(out, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Batch {:.2f} Epoch: {:>4}/{:<4} Loss: {}\n'.format(
batch_i/n_batches, epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 8
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = int(0.008*vocab_size)
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 8 epoch(s)...
Batch 0.07 Epoch: 1/8 Loss: 5.682700274944305
Batch 0.14 Epoch: 1/8 Loss: 5.058973133087158
Batch 0.22 Epoch: 1/8 Loss: 4.850149758815766
Batch 0.29 Epoch: 1/8 Loss: 4.733901435852051
Batch 0.36 Epoch: 1/8 Loss: 4.702723975658417
Batch 0.43 Epoch: 1/8 Loss: 4.7129764575958255
Batch 0.50 Epoch: 1/8 Loss: 4.604222703456879
Batch 0.57 Epoch: 1/8 Loss: 4.471813734531403
Batch 0.65 Epoch: 1/8 Loss: 4.434745234489441
Batch 0.72 Epoch: 1/8 Loss: 4.370398196220398
Batch 0.79 Epoch: 1/8 Loss: 4.468252071857452
Batch 0.86 Epoch: 1/8 Loss: 4.490648648262024
Batch 0.93 Epoch: 1/8 Loss: 4.4951714701652525
Batch 0.07 Epoch: 2/8 Loss: 4.295172506127476
Batch 0.14 Epoch: 2/8 Loss: 4.135700309276581
Batch 0.22 Epoch: 2/8 Loss: 4.057426620483398
Batch 0.29 Epoch: 2/8 Loss: 4.015098742961883
Batch 0.36 Epoch: 2/8 Loss: 4.042482703208924
Batch 0.43 Epoch: 2/8 Loss: 4.1209108972549435
Batch 0.50 Epoch: 2/8 Loss: 4.06078724861145
Batch 0.57 Epoch: 2/8 Loss: 3.9613243341445923
Batch 0.65 Epoch: 2/8 Loss: 3.9673962030410768
Batch 0.72 Epoch: 2/8 Loss: 3.899313010215759
Batch 0.79 Epoch: 2/8 Loss: 4.013478213787079
Batch 0.86 Epoch: 2/8 Loss: 4.05616946220398
Batch 0.93 Epoch: 2/8 Loss: 4.057707423686981
Batch 0.07 Epoch: 3/8 Loss: 3.9509440835349814
Batch 0.14 Epoch: 3/8 Loss: 3.871569080352783
Batch 0.22 Epoch: 3/8 Loss: 3.8039170055389406
Batch 0.29 Epoch: 3/8 Loss: 3.7665256028175356
Batch 0.36 Epoch: 3/8 Loss: 3.790395233154297
Batch 0.43 Epoch: 3/8 Loss: 3.8901807675361635
Batch 0.50 Epoch: 3/8 Loss: 3.82191620016098
Batch 0.57 Epoch: 3/8 Loss: 3.7418594846725464
Batch 0.65 Epoch: 3/8 Loss: 3.751534254550934
Batch 0.72 Epoch: 3/8 Loss: 3.7004877047538756
Batch 0.79 Epoch: 3/8 Loss: 3.815841853618622
Batch 0.86 Epoch: 3/8 Loss: 3.8431962118148806
Batch 0.93 Epoch: 3/8 Loss: 3.844757134437561
Batch 0.07 Epoch: 4/8 Loss: 3.7580897079026405
Batch 0.14 Epoch: 4/8 Loss: 3.695622871875763
Batch 0.22 Epoch: 4/8 Loss: 3.636689314365387
Batch 0.29 Epoch: 4/8 Loss: 3.608045282840729
Batch 0.36 Epoch: 4/8 Loss: 3.623652003288269
Batch 0.43 Epoch: 4/8 Loss: 3.7279516134262085
Batch 0.50 Epoch: 4/8 Loss: 3.6765551443099977
Batch 0.57 Epoch: 4/8 Loss: 3.599049596786499
Batch 0.65 Epoch: 4/8 Loss: 3.6003038249015806
Batch 0.72 Epoch: 4/8 Loss: 3.5561565527915953
Batch 0.79 Epoch: 4/8 Loss: 3.674867242336273
Batch 0.86 Epoch: 4/8 Loss: 3.7052609601020814
Batch 0.93 Epoch: 4/8 Loss: 3.696700491428375
Batch 0.07 Epoch: 5/8 Loss: 3.6283368502766633
Batch 0.14 Epoch: 5/8 Loss: 3.5787184166908266
Batch 0.22 Epoch: 5/8 Loss: 3.5270210223197935
Batch 0.29 Epoch: 5/8 Loss: 3.5010189909935
Batch 0.36 Epoch: 5/8 Loss: 3.5141894998550414
Batch 0.43 Epoch: 5/8 Loss: 3.6115041971206665
Batch 0.50 Epoch: 5/8 Loss: 3.5715427603721617
Batch 0.57 Epoch: 5/8 Loss: 3.499521305561066
Batch 0.65 Epoch: 5/8 Loss: 3.494609861373901
Batch 0.72 Epoch: 5/8 Loss: 3.4636199111938475
Batch 0.79 Epoch: 5/8 Loss: 3.5835597167015076
Batch 0.86 Epoch: 5/8 Loss: 3.5997207770347597
Batch 0.93 Epoch: 5/8 Loss: 3.5836065158843993
Batch 0.07 Epoch: 6/8 Loss: 3.527492625407936
Batch 0.14 Epoch: 6/8 Loss: 3.4843383736610414
Batch 0.22 Epoch: 6/8 Loss: 3.4333666763305666
Batch 0.29 Epoch: 6/8 Loss: 3.4085198526382445
Batch 0.36 Epoch: 6/8 Loss: 3.4364068808555603
Batch 0.43 Epoch: 6/8 Loss: 3.5331649260520934
Batch 0.50 Epoch: 6/8 Loss: 3.5043571348190308
Batch 0.57 Epoch: 6/8 Loss: 3.4188109192848204
Batch 0.65 Epoch: 6/8 Loss: 3.4152122125625612
Batch 0.72 Epoch: 6/8 Loss: 3.3890241742134095
Batch 0.79 Epoch: 6/8 Loss: 3.5019761128425597
Batch 0.86 Epoch: 6/8 Loss: 3.51499818611145
Batch 0.93 Epoch: 6/8 Loss: 3.4990782370567324
Batch 0.07 Epoch: 7/8 Loss: 3.456701330409562
Batch 0.14 Epoch: 7/8 Loss: 3.415873683452606
Batch 0.22 Epoch: 7/8 Loss: 3.3695875201225283
Batch 0.29 Epoch: 7/8 Loss: 3.3444726376533507
Batch 0.36 Epoch: 7/8 Loss: 3.362831311225891
Batch 0.43 Epoch: 7/8 Loss: 3.4602575716972352
Batch 0.50 Epoch: 7/8 Loss: 3.4221904721260072
Batch 0.57 Epoch: 7/8 Loss: 3.3533827748298646
Batch 0.65 Epoch: 7/8 Loss: 3.3491762113571166
Batch 0.72 Epoch: 7/8 Loss: 3.3218634281158446
Batch 0.79 Epoch: 7/8 Loss: 3.436838529109955
Batch 0.86 Epoch: 7/8 Loss: 3.443614191532135
Batch 0.93 Epoch: 7/8 Loss: 3.426708779335022
Batch 0.07 Epoch: 8/8 Loss: 3.4014489128569925
Batch 0.14 Epoch: 8/8 Loss: 3.3592702884674073
Batch 0.22 Epoch: 8/8 Loss: 3.3197010645866394
Batch 0.29 Epoch: 8/8 Loss: 3.283359657764435
Batch 0.36 Epoch: 8/8 Loss: 3.302696551322937
Batch 0.43 Epoch: 8/8 Loss: 3.3986005721092223
Batch 0.50 Epoch: 8/8 Loss: 3.357003251075745
Batch 0.57 Epoch: 8/8 Loss: 3.3030333037376405
Batch 0.65 Epoch: 8/8 Loss: 3.295781925678253
Batch 0.72 Epoch: 8/8 Loss: 3.273532060146332
Batch 0.79 Epoch: 8/8 Loss: 3.373016785144806
Batch 0.86 Epoch: 8/8 Loss: 3.388262035369873
Batch 0.93 Epoch: 8/8 Loss: 3.372592930316925
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I used the values recommended in the lessons as a starting point, then tested different values to find the best-performing model. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
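As a quick, self-contained illustration of the top-k sampling step used in `generate()` below, here is a toy sketch with made-up word scores and a made-up mini vocabulary (purely illustrative, not part of the project code):
```
import numpy as np
import torch
import torch.nn.functional as F

fake_vocab = ['jerry', 'george', 'elaine', 'kramer', 'newman', 'the']   # illustrative only
scores = torch.tensor([[2.0, 1.0, 0.5, 0.2, 0.1, -1.0]])                # pretend word scores for one step
p = F.softmax(scores, dim=1).data
p, top_i = p.topk(3)                                # keep the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())     # sample one of them at random
print(fake_vocab[word_i])
```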
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move back to cpu before rolling the sequence
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:49: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
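For reference, a minimal sketch of one way to fill in the template cell above, mirroring the completed copy earlier in this file (most frequent words get the smallest ids via `collections.Counter`):
```
from collections import Counter

def create_lookup_tables(text):
    """Sketch: build (vocab_to_int, int_to_vocab) from a list of words."""
    word_counts = Counter(text)
    # sort words from most to least frequent
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return (vocab_to_int, int_to_vocab)
```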
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
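A minimal sketch of the tokenizer dictionary for the cell above, following the completed copy earlier in this file:
```
def token_lookup():
    """Sketch: map punctuation symbols to unambiguous tokens."""
    return {
        '.': '||PERIOD||',
        ',': '||COMMA||',
        '"': '||QUOTATION_MARK||',
        ';': '||SEMICOLON||',
        '!': '||EXCLAMATION_MARK||',
        '?': '||QUESTION_MARK||',
        '(': '||LEFT_PAREN||',
        ')': '||RIGHT_PAREN||',
        '-': '||DASH||',
        '\n': '||RETURN||',
    }
```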
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
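One possible implementation of `batch_data` for the cell above, sketched after the completed copy earlier in this file: slide a window of `sequence_length` words over the text and use the word that follows each window as the target.
```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    """Sketch: build a DataLoader of (sequence, next word) pairs."""
    features, targets = [], []
    for n in range(len(words) - sequence_length):
        features.append(words[n:n + sequence_length])
        targets.append(words[n + sequence_length])
    data = TensorDataset(torch.from_numpy(np.array(features)),
                         torch.from_numpy(np.array(targets)))
    return DataLoader(data, batch_size=batch_size)
```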
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
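For reference, a sketch of an LSTM-based implementation that mirrors the completed copy earlier in this file (embedding, then LSTM, then a fully-connected layer, returning only the last time step's word scores). The class is named `RNNSketch` here so it does not shadow the graded `RNN` class above.
```
import torch
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.hidden_dim, self.n_layers, self.output_size = hidden_dim, n_layers, output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input), hidden)
        out = self.fc(lstm_out.contiguous().view(-1, self.hidden_dim))
        out = out.view(batch_size, -1, self.output_size)
        return out[:, -1], hidden  # word scores for the last time step only

    def init_hidden(self, batch_size):
        # two zero tensors (hidden state and cell state), moved to GPU if available
        weight = next(self.parameters()).data
        shape = (self.n_layers, batch_size, self.hidden_dim)
        if torch.cuda.is_available():
            return (weight.new(*shape).zero_().cuda(), weight.new(*shape).zero_().cuda())
        return (weight.new(*shape).zero_(), weight.new(*shape).zero_())
```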
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
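A sketch of the training step for the cell above, following the completed copy earlier in this file: detach the hidden state from the previous batch, run the forward pass, compute the loss, clip the gradients, and step the optimizer.
```
import torch
import torch.nn as nn

def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """Sketch: one forward/backward pass over a batch; returns (loss, hidden)."""
    if torch.cuda.is_available():
        rnn.cuda()
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow across batches
    h = tuple(each.data for each in hidden)
    rnn.zero_grad()
    out, h = rnn(inp, h)
    loss = criterion(out, target)
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # guard against exploding gradients
    optimizer.step()
    return loss.item(), h
```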
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
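As one concrete set of values for the cell above, the completed copy earlier in this file reaches a loss of roughly 3.3 after 8 epochs with settings along these lines (treat them as a starting point rather than the only valid choice; `int_to_vocab` is the dictionary loaded from the preprocessing checkpoint above):
```
sequence_length = 15                      # words per input sequence
batch_size = 128
num_epochs = 8
learning_rate = 0.0005
vocab_size = len(int_to_vocab)            # number of unique tokens
output_size = vocab_size                  # one score per word in the vocabulary
embedding_dim = int(0.008 * vocab_size)   # a few hundred dimensions, well below vocab_size
hidden_dim = 256
n_layers = 2
show_every_n_batches = 500
```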
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# 1. int_to_vocab, which maps integer ids to words
# 2. vocab_to_int, which maps words to unique integer ids
int_to_vocab = {}
vocab_to_int = {}
unique_chars = tuple(set(text))
for index, vocab in enumerate(unique_chars):
int_to_vocab[index] = vocab
vocab_to_int[vocab] = index
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = {
'.' : '||period||',
',' : '||comma||',
'"' : '||quotation_mark||',
';' : '||semicolon||',
'!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||left_parentheses||',
')' : '||right_parentheses||',
'-' : '||dash||',
'\n': '||return||'
}
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
words_len = len(words)
rows = words_len - sequence_length
feature_tensors = np.zeros((rows, sequence_length), dtype=np.int64)
target_tensors = np.zeros(rows, dtype=np.int64)
for i in range(0, rows):
feature_tensors[i] = words[i:i+sequence_length]
target_tensors[i] = words[i+sequence_length]
data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors))
data_loader = DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
words = [0, 1, 2, 3, 4, 5, 6]
sequence_length = 4
batch_size = 2
data_loader = batch_data(words, sequence_length, batch_size)
iterator = iter(data_loader)
for i, batch in enumerate(iterator):
print(f"batch{i} -> {batch}")
###Output
batch0 -> [tensor([[ 0, 1, 2, 3],
[ 1, 2, 3, 4]]), tensor([ 4, 5])]
batch1 -> [tensor([[ 2, 3, 4, 5]]), tensor([ 6])]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
nn_input = nn_input.to(torch.long)
embeds = self.embedding(nn_input)
# Get the outputs and the new hidden state from the lstm
lstm_output, hidden = self.lstm(embeds, hidden)
output = lstm_output.contiguous().view(-1, self.hidden_dim)
# put output through the fully-connected layer
output = self.fc(output)
batch_size = nn_input.size(0)
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. Model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 150
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
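The cell above commits to a single value for each hyperparameter. If you want to compare a few candidates before a full training run (for example different `sequence_length` values, which the question further below asks about), a short sweep can give a rough signal. The sketch below is a hypothetical illustration only: it reuses the notebook's `batch_data`, `RNN` and `forward_back_prop` definitions, and the candidate values and the 500-batch budget per candidate are assumptions, not part of the project.

```python
import numpy as np
import torch
import torch.nn as nn

# hypothetical sweep: train each candidate sequence length for a few hundred batches, compare mean loss
candidate_lengths = (5, 10, 20)   # assumed candidates
max_batches = 500                 # assumed budget per candidate

for seq_len in candidate_lengths:
    loader = batch_data(int_text, seq_len, batch_size)
    model = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()
    hidden = model.init_hidden(batch_size)
    losses = []
    for batch_i, (inputs, labels) in enumerate(loader, 1):
        # stop at the budget, and skip any final partial batch so the hidden state size matches
        if batch_i > max_batches or batch_i > len(loader.dataset) // batch_size:
            break
        loss, hidden = forward_back_prop(model, optimizer, criterion, inputs, labels, hidden)
        losses.append(loss)
    print('sequence_length={}: mean loss over {} batches = {:.3f}'.format(seq_len, len(losses), np.mean(losses)))
```

Such a sweep only compares early-training loss, so it is a coarse guide; the value actually used in this notebook is the `sequence_length = 10` set above.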
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
from workspace_utils import active_session
with active_session():
train()
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.571711266994476
Epoch: 1/10 Loss: 4.878271849155426
Epoch: 1/10 Loss: 4.618291933059693
Epoch: 1/10 Loss: 4.4807379689216615
Epoch: 1/10 Loss: 4.477676140785217
Epoch: 1/10 Loss: 4.513315704345703
Epoch: 1/10 Loss: 4.412488713741302
Epoch: 1/10 Loss: 4.284065252304077
Epoch: 1/10 Loss: 4.259161598205567
Epoch: 1/10 Loss: 4.195514317035675
Epoch: 1/10 Loss: 4.312280205249786
Epoch: 1/10 Loss: 4.346163143157959
Epoch: 1/10 Loss: 4.338524021625519
Epoch: 2/10 Loss: 4.1319010958952065
Epoch: 2/10 Loss: 3.9517530937194825
Epoch: 2/10 Loss: 3.842372539997101
Epoch: 2/10 Loss: 3.800363686084747
Epoch: 2/10 Loss: 3.846530839443207
Epoch: 2/10 Loss: 3.9169021553993226
Epoch: 2/10 Loss: 3.8610221276283263
Epoch: 2/10 Loss: 3.7441554384231566
Epoch: 2/10 Loss: 3.747466440677643
Epoch: 2/10 Loss: 3.713126070022583
Epoch: 2/10 Loss: 3.823201638221741
Epoch: 2/10 Loss: 3.8515528354644775
Epoch: 2/10 Loss: 3.847811044692993
Epoch: 3/10 Loss: 3.7869828355078603
Epoch: 3/10 Loss: 3.669773824214935
Epoch: 3/10 Loss: 3.561634219169617
Epoch: 3/10 Loss: 3.539660086631775
Epoch: 3/10 Loss: 3.5690653014183042
Epoch: 3/10 Loss: 3.6554685406684877
Epoch: 3/10 Loss: 3.598834022521973
Epoch: 3/10 Loss: 3.4788627371788023
Epoch: 3/10 Loss: 3.502801795959473
Epoch: 3/10 Loss: 3.4784091477394106
Epoch: 3/10 Loss: 3.586312406539917
Epoch: 3/10 Loss: 3.6300196738243105
Epoch: 3/10 Loss: 3.587691864967346
Epoch: 4/10 Loss: 3.540191072428559
Epoch: 4/10 Loss: 3.478100221157074
Epoch: 4/10 Loss: 3.3818204498291013
Epoch: 4/10 Loss: 3.3707795600891113
Epoch: 4/10 Loss: 3.384642153263092
Epoch: 4/10 Loss: 3.4753421969413756
Epoch: 4/10 Loss: 3.411461368083954
Epoch: 4/10 Loss: 3.3204568371772765
Epoch: 4/10 Loss: 3.3195564813613894
Epoch: 4/10 Loss: 3.3186293630599977
Epoch: 4/10 Loss: 3.4161300625801085
Epoch: 4/10 Loss: 3.4449900736808776
Epoch: 4/10 Loss: 3.410990931034088
Epoch: 5/10 Loss: 3.39235136703318
Epoch: 5/10 Loss: 3.340482678413391
Epoch: 5/10 Loss: 3.2444980235099794
Epoch: 5/10 Loss: 3.235839537143707
Epoch: 5/10 Loss: 3.244984624862671
Epoch: 5/10 Loss: 3.338359615802765
Epoch: 5/10 Loss: 3.2708742098808288
Epoch: 5/10 Loss: 3.1849301075935363
Epoch: 5/10 Loss: 3.1918037452697754
Epoch: 5/10 Loss: 3.19908083820343
Epoch: 5/10 Loss: 3.2911182861328125
Epoch: 5/10 Loss: 3.2999085855484007
Epoch: 5/10 Loss: 3.280162501335144
Epoch: 6/10 Loss: 3.2784045960511956
Epoch: 6/10 Loss: 3.228626915931702
Epoch: 6/10 Loss: 3.147059064388275
Epoch: 6/10 Loss: 3.1285659017562866
Epoch: 6/10 Loss: 3.1318225393295287
Epoch: 6/10 Loss: 3.22485000705719
Epoch: 6/10 Loss: 3.166000783443451
Epoch: 6/10 Loss: 3.0893503522872923
Epoch: 6/10 Loss: 3.0872444791793825
Epoch: 6/10 Loss: 3.0989111495018005
Epoch: 6/10 Loss: 3.1848491988182066
Epoch: 6/10 Loss: 3.1829391117095946
Epoch: 6/10 Loss: 3.1796915674209596
Epoch: 7/10 Loss: 3.1839484361426136
Epoch: 7/10 Loss: 3.1291361565589906
Epoch: 7/10 Loss: 3.0651475591659545
Epoch: 7/10 Loss: 3.060409274101257
Epoch: 7/10 Loss: 3.053329212665558
Epoch: 7/10 Loss: 3.138073000431061
Epoch: 7/10 Loss: 3.072507423400879
Epoch: 7/10 Loss: 3.0166346545219422
Epoch: 7/10 Loss: 3.00713566160202
Epoch: 7/10 Loss: 3.0260712361335753
Epoch: 7/10 Loss: 3.10122522687912
Epoch: 7/10 Loss: 3.0935050263404844
Epoch: 7/10 Loss: 3.0933089632987976
Epoch: 8/10 Loss: 3.1020189333011245
Epoch: 8/10 Loss: 3.053974709033966
Epoch: 8/10 Loss: 2.989046476840973
Epoch: 8/10 Loss: 2.985050395488739
Epoch: 8/10 Loss: 2.972005692958832
Epoch: 8/10 Loss: 3.0572777276039123
Epoch: 8/10 Loss: 3.0011060910224914
Epoch: 8/10 Loss: 2.947799513339996
Epoch: 8/10 Loss: 2.944150359630585
Epoch: 8/10 Loss: 2.958486298561096
Epoch: 8/10 Loss: 3.026620376586914
Epoch: 8/10 Loss: 3.0191986889839173
Epoch: 8/10 Loss: 3.024890842437744
Epoch: 9/10 Loss: 3.0351819143206713
Epoch: 9/10 Loss: 2.9905002155303957
Epoch: 9/10 Loss: 2.924140061855316
Epoch: 9/10 Loss: 2.9252638804912565
Epoch: 9/10 Loss: 2.915180624961853
Epoch: 9/10 Loss: 2.9896155796051027
Epoch: 9/10 Loss: 2.9380401096343993
Epoch: 9/10 Loss: 2.879092709541321
Epoch: 9/10 Loss: 2.881746258497238
Epoch: 9/10 Loss: 2.8951931734085083
Epoch: 9/10 Loss: 2.958187071800232
Epoch: 9/10 Loss: 2.9620415778160094
Epoch: 10/10 Loss: 2.9731231341051982
Epoch: 10/10 Loss: 2.9325652704238894
Epoch: 10/10 Loss: 2.870407527923584
Epoch: 10/10 Loss: 2.86723651266098
Epoch: 10/10 Loss: 2.850922607421875
Epoch: 10/10 Loss: 2.936414031505585
Epoch: 10/10 Loss: 2.8825682010650633
Epoch: 10/10 Loss: 2.8285043711662294
Epoch: 10/10 Loss: 2.8324423470497133
Epoch: 10/10 Loss: 2.8376891388893126
Epoch: 10/10 Loss: 2.910601372718811
Epoch: 10/10 Loss: 2.8955058484077454
Epoch: 10/10 Loss: 2.911742573261261
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**- sequence_length = 10: I first tried a sequence length of 20, but with sequences that long the model did not train well for me, so I reduced it to 10.- batch_size = 128: I started with a batch size of 64, but since I was training on a GPU with memory to spare, I increased it to 128 for more efficient training.- num_epochs = 10: I initially tried 20 epochs, but training took too long and the loss was not decreasing even after 5 epochs; a 5-epoch run also showed no improvement, probably because the other parameters were not yet properly tuned, so I settled on 10.- learning_rate = 0.001: I started with 0.01, but it took too long to converge; with 0.001 I saw a drastic improvement.- embedding_dim = 150: after reading a few posts on the Udacity student forums, I increased this value from 128 to 150, and it worked well.- hidden_dim = 512: I started with 256, but increased it to 512 while tuning the other parameters to drive the loss down, and this works well.- n_layers = 2: since this should be between 1 and 3, I chose 2, which is sufficient to train efficiently. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: rise kessler.
george: what?
jerry: it's a crime.
hoyt: and the milk was correct.
chiles: soup nazi?
elaine: oh, no, not the ones!
hoyt: so you think she's killed?
jerry: i don't know. but it is a conveyance.
elaine: what do you mean?
elaine: well, i was just going into the parking lot of health.
hoyt: and then who was this woman?
elaine: i know.
jerry: i can't believe we're going.
jerry: well, it's the only one who lives in the wheelchair.
elaine: you know, you know, the only ones who has been discussing mortal danger.
hoyt: so what did you do about that 400 lady?
hoyt: i don't know what this means.
hoyt: so, essentially who invaded spain up to?
jerry: what is this about the defendants of darkness. beep?
george: no- one's not.
elaine: well maybe we could get together. you gotta go to paris and die, massachusetts, and then i can prove you were making robbed medium.
jerry: so, you think we could refill a relationship.
hoyt: you know what? i mean, it's a good evening.
jerry: you know, you can call the court. they don't have to create a bystander.
jerry: you want the video?
jerry: i don't think so.
george: oh, yeah.
george: oh! come on, come on, sit down, sit down.
[new witness: moors
guard: soup's a waste hilly. jane, essentially, and, and you know, i would have to get out of here.
jerry: what is your name?
kramer: jackie complaining?
jerry: no no no no. donald. i think it's a good samaritan sandwich.
elaine: you know, the whole wheelchair is, they would have
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
To allow for long-running processes (i.e. network training) import [workspace_utils](https://github.com/udacity/workspaces-student-support/tree/master/jupyter). Using magic command %load to import into next cell.
###Code
import os
project_dir = "/home/workspace/"
os.path.isfile(project_dir + "workspace_utils.py")
# %load workspace_utils.py
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
def keep_awake(iterable, delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import keep_awake
for i in keep_awake(range(5)):
# do iteration with lots of work here
"""
with active_session(delay, interval): yield from iterable
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
[Explore](dataExplore) data in more depth, and [export refined version](dataRefine) - **See Appendix, where after data exploration, very short and unintelligible entries are removed**- **Then, the data is saved in an alternative file, Seinfeld_Scripts_cleaned.txt** Set up a word2vec lookup so we can use pre-trained weights later- https://medium.com/@martinpella/how-to-use-pre-trained-word-embeddings-in-pytorch-71ca59249f76- using the Glove 6B 300 embeddings set- allow for the code having been run previously with the weights matrix saved as weights_matrix.pkl, in which case we can bypass the build step
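The Appendix code that produces Seinfeld_Scripts_cleaned.txt is not reproduced here. Purely as an illustration of the kind of filtering described above, a minimal sketch might look like the following; the three-word threshold and the output path are assumptions, and the removal of unintelligible entries is left out.

```python
# hypothetical sketch of the cleaning step described above, not the actual Appendix code
min_words = 3  # assumed threshold for dropping 'very short' entries
with open('./data/Seinfeld_Scripts.txt', 'r') as f:
    lines = f.read().split('\n')
kept = [line for line in lines if len(line.split()) >= min_words]
with open('./data/Seinfeld_Scripts_cleaned.txt', 'w') as f:
    f.write('\n'.join(kept))
```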
###Code
weights_matrix = []
glove_path = '../../data/glove6B'
weights_file = 'weights_matrix.pkl'
use_word2vec = False
import os
import numpy as np
import pickle
if os.path.isfile(f'{weights_file}'):
weights_matrix = pickle.load(open(f'weights_matrix.pkl', 'rb'))
use_word2vec = True
use_word2vec
###Output
_____no_output_____
###Markdown
**Next step should not be required as we load the pre-built weights above from a pickle file**
###Code
if not use_word2vec:
glove_file = os.path.join(f'{glove_path}/glove.6B.300d.txt')
use_word2vec = (os.path.isfile(glove_file))
use_word2vec
use_word2vec and len(weights_matrix) > 0
use_word2vec and os.path.isfile(f'{glove_path}/6B.300_words.pkl')
###Output
_____no_output_____
###Markdown
**If running the creation of the word2vec vectors below, bcolz will be required; however, we bypass this by loading the pre-built weights above**- **Therefore, do not run the next 3 cells...**
###Code
!conda install -c conda-forge bcolz
if use_word2vec and len(weights_matrix) == 0 and not os.path.isfile(f'{glove_path}/6B.300_words.pkl'):
import bcolz
import numpy as np
import pickle
w2v_words = []
idx = 0
w2v_word2idx = {}
w2v_vectors = bcolz.carray(np.zeros(1), rootdir=f'{glove_path}/6B.300.dat', mode='w')
with open(f'{glove_path}/glove.6B.300d.txt', 'rb') as f:
for l in f:
line = l.decode().split()
word = line[0]
w2v_words.append(word)
w2v_word2idx[word] = idx
idx += 1
vect = np.array(line[1:]).astype(np.float)
w2v_vectors.append(vect)
w2v_vectors = bcolz.carray(w2v_vectors[1:].reshape((400000, 300)), rootdir=f'{glove_path}/6B.300.dat', mode='w')
w2v_vectors.flush()
pickle.dump(w2v_words, open(f'{glove_path}/6B.300_words.pkl', 'wb'))
pickle.dump(w2v_word2idx, open(f'{glove_path}/6B.300_idx.pkl', 'wb'))
if use_word2vec and len(weights_matrix) == 0 :
import bcolz
import pickle
w2v_vectors = bcolz.open(f'{glove_path}/6B.300.dat')[:]
w2v_words = pickle.load(open(f'{glove_path}/6B.300_words.pkl', 'rb'))
w2v_word2idx = pickle.load(open(f'{glove_path}/6B.300_idx.pkl', 'rb'))
glove = {w: w2v_vectors[w2v_word2idx[w]] for w in w2v_words}
###Output
_____no_output_____
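With `glove` (or the pickled `weights_matrix`) available, the pre-trained vectors can later be copied into the model's embedding layer. The sketch below is a hypothetical illustration rather than code from this notebook: `build_embedding_from_glove` is an assumed helper name, it expects a `vocab_to_int` mapping like the one created in the pre-processing section further down, and it falls back to small random vectors for words that GloVe does not cover.

```python
import numpy as np
import torch
import torch.nn as nn

def build_embedding_from_glove(vocab_to_int, glove, embedding_dim=300):
    # one row per vocabulary word; start from small random vectors for out-of-vocabulary words
    matrix = np.random.normal(scale=0.6, size=(len(vocab_to_int), embedding_dim))
    for word, idx in vocab_to_int.items():
        if word in glove:
            matrix[idx] = glove[word]  # copy the pre-trained GloVe vector
    # load the matrix into an nn.Embedding layer; freeze=False keeps the vectors trainable
    return nn.Embedding.from_pretrained(torch.from_numpy(matrix).float(), freeze=False)
```

If this were used, the randomly initialised `nn.Embedding(vocab_size, embedding_dim)` inside the RNN's `__init__` could be swapped for such a layer, with `embedding_dim` set to 300 to match the GloVe vectors.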
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
from collections import Counter
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
if type(text) == str:
text = text.split()
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.' : '||period||',
',' : '||comma||',
'"' : '||quotation_mark||',
';' : '||semi_colon||',
'!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||left_parentheses||',
')' : '||right_parentheses||',
'-' : '||dash||',
'\n' : '||return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
# Run this cell only if you are using Google colab
!rm -r data/
!git clone https://github.com/ahmedmbakr/deep-learning-v2-pytorch/
!mv deep-learning-v2-pytorch/project-tv-script-generation/* .
!rm -rf deep-learning-v2-pytorch/
!rm dlnd_tv_script_generation.ipynb
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text) # counts is a dictionary, where the key is a word and its value is its number of occurrences
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
int_to_vocab = {idx: word for idx, word in enumerate(vocab)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dict = {'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||'}
return dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import torch
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
number_of_batches = len(words)//batch_size
words = words[:number_of_batches*batch_size]
features_vec = []
labels = []
for i in range(len(words) - sequence_length):
features_vec.append(words[i: i+ sequence_length])
labels.append(words[i + sequence_length])
features_vec = np.array(features_vec)
labels = np.array(labels)
train_data = TensorDataset(torch.from_numpy(features_vec), torch.from_numpy(labels))
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
# return a dataloader
return train_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
train_loader = batch_data([1,2,3,4,5,6,7], 4, 2)
for train_features, train_labels in train_loader:
print("Features:")
print("\tSize: ", train_features.shape)
print("\tData: ", train_features)
print("Labels:")
print("\tSize: ", train_labels.shape)
print("\tData: ", train_labels)
###Output
Features:
Size: torch.Size([2, 4])
Data: tensor([[1, 2, 3, 4],
[2, 3, 4, 5]])
Labels:
Size: torch.Size([2])
Data: tensor([5, 6])
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[12, 13, 14, 15, 16],
[34, 35, 36, 37, 38],
[29, 30, 31, 32, 33],
[ 8, 9, 10, 11, 12],
[ 2, 3, 4, 5, 6],
[42, 43, 44, 45, 46],
[ 4, 5, 6, 7, 8],
[27, 28, 29, 30, 31],
[23, 24, 25, 26, 27],
[20, 21, 22, 23, 24]])
torch.Size([10])
tensor([17, 39, 34, 13, 7, 47, 9, 32, 28, 25])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define all layers
self.embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim)
self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, batch_first=True, num_layers=n_layers, dropout=dropout)
self.fc = nn.Linear(in_features=hidden_dim, out_features=output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeddings = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeddings, hidden) # lstm_output size is (batch_size, seq_length, hidden_dim)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim) # size is (batch_size * seq_length, hidden_dim)
output = self.fc(lstm_out) # size is (batch_size*seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return output[:,-1,:], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
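For intuition, here is a minimal sketch (not one of the graded cells) that instantiates the `RNN` class above with small, made-up dimensions and checks that `forward` returns exactly one row of word scores per input sequence; every size below is an arbitrary toy value, and the `train_on_gpu` flag from the earlier GPU-check cell is assumed to be in scope.
```
import torch

_vocab, _emb, _hid, _layers, _batch, _seq = 20, 8, 16, 2, 4, 5   # toy sizes, chosen only for illustration
toy_rnn = RNN(_vocab, _vocab, _emb, _hid, _layers, dropout=0.5)
toy_input = torch.randint(0, _vocab, (_batch, _seq))              # a fake batch of word ids
if train_on_gpu:
    toy_rnn, toy_input = toy_rnn.cuda(), toy_input.cuda()
toy_hidden = toy_rnn.init_hidden(_batch)
toy_scores, toy_hidden = toy_rnn(toy_input, toy_hidden)
print(toy_scores.shape)      # torch.Size([4, 20])   -> (batch_size, vocab_size), last time step only
print(toy_hidden[0].shape)   # torch.Size([2, 4, 16]) -> (n_layers, batch_size, hidden_dim)
```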
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden, clip=5):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp = inp.cuda()
target = target.cuda()
rnn.cuda()
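# detach the hidden state from its history so gradients are only tracked for the current batch (truncated backprop)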
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, h = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 500
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(int_to_vocab)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.328217841148376
Epoch: 1/20 Loss: 4.579354710578919
Epoch: 1/20 Loss: 4.368084780216217
Epoch: 2/20 Loss: 4.181097328738995
Epoch: 2/20 Loss: 4.075264681339264
Epoch: 2/20 Loss: 4.030428939819336
Epoch: 3/20 Loss: 3.9267739162591226
Epoch: 3/20 Loss: 3.8696484966278075
Epoch: 3/20 Loss: 3.852487372875214
Epoch: 4/20 Loss: 3.7827496747860963
Epoch: 4/20 Loss: 3.7328899631500243
Epoch: 4/20 Loss: 3.7185927472114564
Epoch: 5/20 Loss: 3.6586176642665156
Epoch: 5/20 Loss: 3.6243782715797423
Epoch: 5/20 Loss: 3.6367754826545715
Epoch: 6/20 Loss: 3.5592333095131554
Epoch: 6/20 Loss: 3.536521517276764
Epoch: 6/20 Loss: 3.546740944862366
Epoch: 7/20 Loss: 3.4769521469052878
Epoch: 7/20 Loss: 3.467031841278076
Epoch: 7/20 Loss: 3.479037853717804
Epoch: 8/20 Loss: 3.4126599039002246
Epoch: 8/20 Loss: 3.395670522212982
Epoch: 8/20 Loss: 3.4113064551353456
Epoch: 9/20 Loss: 3.35177768966704
Epoch: 9/20 Loss: 3.3366590361595154
Epoch: 9/20 Loss: 3.3500011506080627
Epoch: 10/20 Loss: 3.288271298962687
Epoch: 10/20 Loss: 3.2833918738365173
Epoch: 10/20 Loss: 3.3056934962272644
Epoch: 11/20 Loss: 3.2440229823306144
Epoch: 11/20 Loss: 3.2437193269729616
Epoch: 11/20 Loss: 3.2575005469322202
Epoch: 12/20 Loss: 3.1957891112238666
Epoch: 12/20 Loss: 3.196294835090637
Epoch: 12/20 Loss: 3.221501992225647
Epoch: 13/20 Loss: 3.167061459515743
Epoch: 13/20 Loss: 3.153742591381073
Epoch: 13/20 Loss: 3.1755625610351563
Epoch: 14/20 Loss: 3.126175165785379
Epoch: 14/20 Loss: 3.1193177909851073
Epoch: 14/20 Loss: 3.1494939904212953
Epoch: 15/20 Loss: 3.0894218505113975
Epoch: 15/20 Loss: 3.093368791103363
Epoch: 15/20 Loss: 3.1114408712387087
Epoch: 16/20 Loss: 3.0625199565181025
Epoch: 16/20 Loss: 3.0643949279785154
Epoch: 16/20 Loss: 3.0867559518814085
Epoch: 17/20 Loss: 3.0399409098643453
Epoch: 17/20 Loss: 3.027000280857086
Epoch: 17/20 Loss: 3.060256417751312
Epoch: 18/20 Loss: 3.008726236007223
Epoch: 18/20 Loss: 3.013799873828888
Epoch: 18/20 Loss: 3.0325880279541018
Epoch: 19/20 Loss: 2.988782970080126
Epoch: 19/20 Loss: 2.98451042509079
Epoch: 19/20 Loss: 3.008022735595703
Epoch: 20/20 Loss: 2.958576929828064
Epoch: 20/20 Loss: 2.959776393890381
Epoch: 20/20 Loss: 2.988870225906372
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - Sequence length: I tried sequence lengths between 10 and 20, with 10 being the best option, as the average sentence length is 5.6 words.- Learning Rate: I tried a learning rate of 0.01, but the loss got stuck around 3.6 for more than 7 epochs, while a learning rate of 0.001 achieves better results, as shown in the notebook results.- Num Epochs: After training for a while, I found that 20 epochs is a reasonable number that achieves a loss of 2.9, which is good for passing the project. Furthermore, the network kept converging for more than 4 epochs. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
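Before reading the full `generate` function below, the top-k sampling step it relies on is easy to see in isolation. This is a throw-away sketch with made-up word scores for a five-word vocabulary, not part of the provided cell:
```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.0, 3.0, 0.1]])   # pretend network output for a 5-word vocabulary
p = F.softmax(scores, dim=1).data                     # convert scores to probabilities
p, top_i = p.topk(3)                                  # keep only the 3 most likely word indices
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())       # sample among them, weighted by probability
print(word_i)                                         # one of 3, 0 or 2 -- usually 3
```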
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: is all about the other line?
jerry: no.
george:(confused) i don't know... you have to be in a slump?
jerry: yeah, yeah. i got it in the building. i don't know what they do.
kramer: oh, i know. i mean, it's just a little concerned.
jerry:(to george, rewoman) oh, you know what?
elaine: what?
jerry: i know it's like a sunny.
george: i don't think i should go for the rest of your life.
jerry: oh. you know, it's not really much.(jerry walks into his tracks)
kramer: well you gotta go.
jerry: i don't know....
george:(to jerry) hey, you got a problem with the swirl?
kramer: no!
jerry: you can't go to the bathroom.
george: i don't want to talk.
jerry:(to kramer) hey, i just wanted to talk, i don't know what i am.
jerry: i mean, we were in my building, you have to be there, and now, i'll tell you what, chubs.(to elaine) i got some cardboard on the outs. i got a new recliner of water.(to elaine) you know, i can't believe that... you got a little lysol on it.
george: i think i could.
jerry: what do you want to say when you were making a lot of women in danbury.
jerry: i mean what about the show?
george: what do you mean?
jerry: i think i can get it out.
jerry:(smiling) well, you don't want it!
george:(to kramer) i don't know, i can't take my money back.
kramer:(to kramer) hey, what am i supposed to do with this thing?
george: what?
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_unique = list(set(text))
vocab_to_int = {word: ii for ii, word in enumerate(word_unique)}
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
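A quick usage sketch (toy text, not a graded cell) showing the round trip between the two dictionaries returned above:
```
toy_words = 'the cat sat on the mat'.split()       # a toy corpus, already split into words
v2i, i2v = create_lookup_tables(toy_words)
print(v2i['cat'], i2v[v2i['cat']])                 # some integer id, followed by 'cat' again
assert all(i2v[v2i[w]] == w for w in toy_words)    # every word survives the id round trip
```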
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||',
}
return token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
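To see what this tokenization buys us, here is a rough sketch of how such a dictionary can be applied to raw text before splitting on spaces. The course's `helper.preprocess_and_save_data` presumably does something along these lines, but its implementation is not shown in this notebook, so treat the snippet as an illustration only:
```
sample = 'bye! bye.'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))   # pad each symbol token with spaces
print(sample.lower().split())
# ['bye', '||exclamation_mark||', 'bye', '||period||']  -- both occurrences of "bye" now share one id
```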
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
number = len(words) - sequence_length
train_x = np.zeros((number, sequence_length))
train_y = np.zeros(number)
for idx in range(0, len(words) - sequence_length, 1):
train_x[idx] = words[idx:idx+sequence_length]
train_y[idx] = words[idx+sequence_length]
# return a dataloader
train_x, train_y = train_x.astype(np.int), train_y.astype(np.int)
data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
data_loader = DataLoader(data, batch_size=batch_size, shuffle=True)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
loader = batch_data(int_text, 10, 5)
loader_iter = iter(loader)
x, y = loader_iter.next()
print(x)
print(y)
###Output
tensor([[10915, 6191, 12842, 10727, 11216, 12070, 14734, 4585, 17886, 9921],
[ 5777, 7971, 4534, 2699, 5901, 7288, 9921, 20661, 20661, 21264],
[21264, 3382, 17526, 13787, 17526, 13787, 14752, 9921, 13787, 14752],
[20661, 20661, 2629, 14089, 13221, 5543, 2729, 15491, 17950, 13959],
[ 4534, 20948, 13632, 2949, 17526, 2949, 2224, 9921, 9921, 9921]],
dtype=torch.int32)
tensor([20661, 8211, 9921, 643, 9921], dtype=torch.int32)
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[24, 25, 26, 27, 28],
[21, 22, 23, 24, 25],
[35, 36, 37, 38, 39],
[30, 31, 32, 33, 34],
[13, 14, 15, 16, 17],
[32, 33, 34, 35, 36],
[10, 11, 12, 13, 14],
[ 0, 1, 2, 3, 4],
[ 6, 7, 8, 9, 10],
[40, 41, 42, 43, 44]], dtype=torch.int32)
torch.Size([10])
tensor([29, 26, 40, 35, 18, 37, 15, 5, 11, 45], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
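The second hint above is easiest to see on a throw-away tensor before reading the class below; the sizes are arbitrary toy values used only for this sketch:
```
import torch

batch_size, seq_length, output_size = 3, 4, 7                     # toy sizes
flat_scores = torch.randn(batch_size * seq_length, output_size)   # shape produced by the fully-connected layer
scores = flat_scores.view(batch_size, -1, output_size)            # back to (batch_size, seq_length, output_size)
last = scores[:, -1]                                              # word scores for the final time step only
print(last.shape)                                                 # torch.Size([3, 7])
```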
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embed_out = self.embed(nn_input.long())
lstm_out, hidden = self.lstm(embed_out, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
output = self.dropout(lstm_out)
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# detach the previously accumulated hidden state so gradients are only computed for the current batch
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target.long())
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 5.653651537895203
Epoch: 1/30 Loss: 4.90751216506958
Epoch: 1/30 Loss: 4.691600054740905
Epoch: 1/30 Loss: 4.5743850994110105
Epoch: 1/30 Loss: 4.495138036251068
Epoch: 1/30 Loss: 4.409648418903351
Epoch: 2/30 Loss: 4.334919147375153
Epoch: 2/30 Loss: 4.239747682571411
Epoch: 2/30 Loss: 4.22359389591217
Epoch: 2/30 Loss: 4.208342519760132
Epoch: 2/30 Loss: 4.18584727525711
Epoch: 2/30 Loss: 4.1757828798294065
Epoch: 3/30 Loss: 4.122651407389137
Epoch: 3/30 Loss: 4.06463751745224
Epoch: 3/30 Loss: 4.052458032608032
Epoch: 3/30 Loss: 4.053941530227661
Epoch: 3/30 Loss: 4.044666540622711
Epoch: 3/30 Loss: 4.031877816200256
Epoch: 4/30 Loss: 3.993378814158401
Epoch: 4/30 Loss: 3.9424355187416076
Epoch: 4/30 Loss: 3.9386117115020753
Epoch: 4/30 Loss: 3.943090013027191
Epoch: 4/30 Loss: 3.961048149108887
Epoch: 4/30 Loss: 3.950304046630859
Epoch: 5/30 Loss: 3.8977849951119925
Epoch: 5/30 Loss: 3.8512346148490906
Epoch: 5/30 Loss: 3.8793023271560667
Epoch: 5/30 Loss: 3.876558602809906
Epoch: 5/30 Loss: 3.8852452478408814
Epoch: 5/30 Loss: 3.871468797206879
Epoch: 6/30 Loss: 3.833934332054805
Epoch: 6/30 Loss: 3.7882692008018495
Epoch: 6/30 Loss: 3.814844542503357
Epoch: 6/30 Loss: 3.80259716463089
Epoch: 6/30 Loss: 3.837701565742493
Epoch: 6/30 Loss: 3.814786448478699
Epoch: 7/30 Loss: 3.793363190763365
Epoch: 7/30 Loss: 3.7537921266555787
Epoch: 7/30 Loss: 3.7632973465919495
Epoch: 7/30 Loss: 3.760328760147095
Epoch: 7/30 Loss: 3.769103415966034
Epoch: 7/30 Loss: 3.767963858604431
Epoch: 8/30 Loss: 3.7466340959072113
Epoch: 8/30 Loss: 3.7119698834419252
Epoch: 8/30 Loss: 3.7229914646148683
Epoch: 8/30 Loss: 3.7427628273963927
Epoch: 8/30 Loss: 3.731445463180542
Epoch: 8/30 Loss: 3.730895607471466
Epoch: 9/30 Loss: 3.701259089194662
Epoch: 9/30 Loss: 3.6684669823646545
Epoch: 9/30 Loss: 3.683648962497711
Epoch: 9/30 Loss: 3.6998754215240477
Epoch: 9/30 Loss: 3.711835563659668
Epoch: 9/30 Loss: 3.7058543939590454
Epoch: 10/30 Loss: 3.6758872163974172
Epoch: 10/30 Loss: 3.6378495421409607
Epoch: 10/30 Loss: 3.6526355443000793
Epoch: 10/30 Loss: 3.6671325368881225
Epoch: 10/30 Loss: 3.66367187833786
Epoch: 10/30 Loss: 3.7037854652404785
Epoch: 11/30 Loss: 3.6374831652738213
Epoch: 11/30 Loss: 3.613449089050293
Epoch: 11/30 Loss: 3.627688611984253
Epoch: 11/30 Loss: 3.6395210666656492
Epoch: 11/30 Loss: 3.6616045937538146
Epoch: 11/30 Loss: 3.6401246671676635
Epoch: 12/30 Loss: 3.627249090167565
Epoch: 12/30 Loss: 3.58425994682312
Epoch: 12/30 Loss: 3.6095239901542664
Epoch: 12/30 Loss: 3.604109926700592
Epoch: 12/30 Loss: 3.623690320968628
Epoch: 12/30 Loss: 3.6466622977256775
Epoch: 13/30 Loss: 3.5900592549545007
Epoch: 13/30 Loss: 3.5625737166404723
Epoch: 13/30 Loss: 3.580409061431885
Epoch: 13/30 Loss: 3.5951049642562865
Epoch: 13/30 Loss: 3.606347478866577
Epoch: 13/30 Loss: 3.6214797172546387
Epoch: 14/30 Loss: 3.568286045780027
Epoch: 14/30 Loss: 3.5563666639328004
Epoch: 14/30 Loss: 3.5614310340881348
Epoch: 14/30 Loss: 3.576685881137848
Epoch: 14/30 Loss: 3.5805896663665773
Epoch: 14/30 Loss: 3.576421980857849
Epoch: 15/30 Loss: 3.5507966890567686
Epoch: 15/30 Loss: 3.5327212505340575
Epoch: 15/30 Loss: 3.5482060432434084
Epoch: 15/30 Loss: 3.5551900625228883
Epoch: 15/30 Loss: 3.550418635845184
Epoch: 15/30 Loss: 3.56854900932312
Epoch: 16/30 Loss: 3.532493113986845
Epoch: 16/30 Loss: 3.5089347772598267
Epoch: 16/30 Loss: 3.519918125629425
Epoch: 16/30 Loss: 3.524654601097107
Epoch: 16/30 Loss: 3.5563516554832457
Epoch: 16/30 Loss: 3.5414532289505005
Epoch: 17/30 Loss: 3.5278723479771035
Epoch: 17/30 Loss: 3.4890005717277526
Epoch: 17/30 Loss: 3.5037870893478393
Epoch: 17/30 Loss: 3.5105055804252623
Epoch: 17/30 Loss: 3.5204403195381166
Epoch: 17/30 Loss: 3.5319206948280333
Epoch: 18/30 Loss: 3.4999972567325686
Epoch: 18/30 Loss: 3.48245787525177
Epoch: 18/30 Loss: 3.4949358801841734
Epoch: 18/30 Loss: 3.4959473910331726
Epoch: 18/30 Loss: 3.5166872444152832
Epoch: 18/30 Loss: 3.5102215929031373
Epoch: 19/30 Loss: 3.4804890218789017
Epoch: 19/30 Loss: 3.4623445014953615
Epoch: 19/30 Loss: 3.4759190764427186
Epoch: 19/30 Loss: 3.502761978626251
Epoch: 19/30 Loss: 3.4779168181419373
Epoch: 19/30 Loss: 3.508084747314453
Epoch: 20/30 Loss: 3.46266587042227
Epoch: 20/30 Loss: 3.4527813234329225
Epoch: 20/30 Loss: 3.4639769911766054
Epoch: 20/30 Loss: 3.4709736428260802
Epoch: 20/30 Loss: 3.485342619895935
Epoch: 20/30 Loss: 3.4832906169891356
Epoch: 21/30 Loss: 3.4530542858732427
Epoch: 21/30 Loss: 3.4330017304420473
Epoch: 21/30 Loss: 3.4354477338790894
Epoch: 21/30 Loss: 3.4521112327575683
Epoch: 21/30 Loss: 3.4764281101226806
Epoch: 21/30 Loss: 3.4916711301803587
Epoch: 22/30 Loss: 3.4417117162933195
Epoch: 22/30 Loss: 3.4275919208526613
Epoch: 22/30 Loss: 3.42921399307251
Epoch: 22/30 Loss: 3.449096960544586
Epoch: 22/30 Loss: 3.457567615509033
Epoch: 22/30 Loss: 3.467813799381256
Epoch: 23/30 Loss: 3.438223399282471
Epoch: 23/30 Loss: 3.406054844379425
Epoch: 23/30 Loss: 3.4144740524291994
Epoch: 23/30 Loss: 3.40874263381958
Epoch: 23/30 Loss: 3.4570819439888
Epoch: 23/30 Loss: 3.4628853545188902
Epoch: 24/30 Loss: 3.4173108714867415
Epoch: 24/30 Loss: 3.4038249850273132
Epoch: 24/30 Loss: 3.3983858695030214
Epoch: 24/30 Loss: 3.4265176091194154
Epoch: 24/30 Loss: 3.43389954662323
Epoch: 24/30 Loss: 3.4413152842521666
Epoch: 25/30 Loss: 3.4147290468700535
Epoch: 25/30 Loss: 3.390251907348633
Epoch: 25/30 Loss: 3.392515392780304
Epoch: 25/30 Loss: 3.4004006690979005
Epoch: 25/30 Loss: 3.4263303847312927
Epoch: 25/30 Loss: 3.424504452705383
Epoch: 26/30 Loss: 3.387876789502012
Epoch: 26/30 Loss: 3.386119505882263
Epoch: 26/30 Loss: 3.3824402055740355
Epoch: 26/30 Loss: 3.4038275060653684
Epoch: 26/30 Loss: 3.403069474220276
Epoch: 26/30 Loss: 3.422062706947327
Epoch: 27/30 Loss: 3.3852112978939117
Epoch: 27/30 Loss: 3.364732520580292
Epoch: 27/30 Loss: 3.372291766166687
Epoch: 27/30 Loss: 3.3911485948562623
Epoch: 27/30 Loss: 3.4038173732757566
Epoch: 27/30 Loss: 3.4150177001953126
Epoch: 28/30 Loss: 3.3832248524437105
Epoch: 28/30 Loss: 3.3440801153182984
Epoch: 28/30 Loss: 3.3625323009490966
Epoch: 28/30 Loss: 3.392375905036926
Epoch: 28/30 Loss: 3.392505485534668
Epoch: 28/30 Loss: 3.4021625127792356
Epoch: 29/30 Loss: 3.356802045087504
Epoch: 29/30 Loss: 3.3434771904945375
Epoch: 29/30 Loss: 3.3626143341064454
Epoch: 29/30 Loss: 3.3663632860183714
Epoch: 29/30 Loss: 3.3820041136741636
Epoch: 29/30 Loss: 3.391364590167999
Epoch: 30/30 Loss: 3.3617617255303918
Epoch: 30/30 Loss: 3.334050114631653
Epoch: 30/30 Loss: 3.3524353551864623
Epoch: 30/30 Loss: 3.35804864025116
Epoch: 30/30 Loss: 3.369401752948761
Epoch: 30/30 Loss: 3.3722629885673525
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Most hyperparameters are taken from the values used in previous lectures, and sequence_length was chosen by intuition: to predict a word, a fair amount of context (roughly 10 nearby words) should be available. The learning rate was changed from 0.01 to 0.001, because with the former the loss got stuck around 4.0 and stopped decreasing. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
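One detail of the `generate` function below that is worth seeing in isolation is how the sliding window of input ids is advanced after each prediction. A sketch with made-up ids:
```
import numpy as np

current_seq = np.array([[11, 12, 13, 14]])   # pretend ids of the last four generated words
new_word_id = 15                             # pretend id just sampled from the network
current_seq = np.roll(current_seq, -1, 1)    # shift every id one position to the left
current_seq[-1][-1] = new_word_id            # overwrite the wrapped-around oldest id with the newest
print(current_seq)                           # [[12 13 14 15]]
```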
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: i think you're not gonna get any of this.
jerry: i don't know, you know.
jerry: oh, hi.
elaine: hello!
jerry: hi, it's elaine.
jerry: hello.
jerry: hey, hey, you know, i don't even know what to tell you.
jerry: well you know, it's not the same, but you know, i was wondering that the guy who was in a coma. i mean, if you want a little more, like a.....(jerry nods)
jerry: hey, jerry! i got a great time.
jerry: well, i guess i was just trying to get out of the building. i think he was going to do something like that?(jerry looks at george) hey!
elaine: hey.
george: hey.
elaine: hey.
jerry: hey...
jerry:(cont'd) what are you doing here?
kramer: no, i'm going to be late.
jerry: oh, yeah.
george:(to kramer) hey, you know what? you know, it's a little strange thing!
elaine: oh, no, i don't know what to tell you, i'm going to get some popcorn, and i don't want to know how much i am.
jerry: i think it's a mistake.
george: oh, i think you can get the ball.
george: what is that?
jerry:(to elaine) i can't believe you were going to be a little bit.
george: what is it?
george: well, it might be very good to know what you said, but you know, i think you can do it.
morty:(to kramer) i can't believe it was that! i can't believe that you were going to get out of here.(george nods)
elaine: hi.
jerry: hello.
kramer: hello jerry.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
from collections import Counter
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
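# Illustrative check (hypothetical tiny corpus, not the assignment data):
#   v2i, i2v = create_lookup_tables(['the', 'cat', 'sat', 'on', 'the', 'mat'])
#   'the' is the most frequent word, so v2i['the'] == 0 and i2v[0] == 'the';
#   the remaining (tied) words get ids 1-4 in a stable order.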
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
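A small sketch of how such a dictionary is typically applied (the actual replacement happens inside `helper.preprocess_and_save_data`, which is not shown here):
```
# sketch only: pad each symbol's token with spaces, then split on whitespace
tokens = {'!': '||Exclamation_Mark||', '.': '||Period||'}
text = 'bye! bye.'
for symbol, token in tokens.items():
    text = text.replace(symbol, ' {} '.format(token))
print(text.split())  # ['bye', '||Exclamation_Mark||', 'bye', '||Period||']
```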
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '<PERIOD>'
tokens[','] = '<COMMA>'
tokens['"'] = '<QUOTATION_MARK>'
tokens[';'] = '<SEMICOLON>'
tokens['!'] = '<EXCLAMATION_MARK>'
tokens['?'] = '<QUESTION_MARK>'
tokens['('] = '<LEFT_PAREN>'
tokens[')'] = '<RIGHT_PAREN>'
tokens['-'] = '<DASH>'
tokens['\n'] = '<NEW_LINE>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
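For instance, a minimal sketch of the feature/target construction described above, using the toy `words` list (this is only an illustration, not the graded `batch_data` implementation):
```
# toy sketch of the feature/target windows and a DataLoader around them
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
# features -> [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
# targets  -> [5, 6, 7]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
for x, y in DataLoader(data, batch_size=2):
    print(x, y)
```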
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# print("feature: ",x_batch)
batch_y = words[idx_end]
# print("target: ", batch_y)
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
    # note: the template recommends shuffling the training data; shuffle=False is kept here so the test cell below shows ordered batches
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
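To make the two hints concrete, here is a standalone shape walkthrough with made-up sizes (the `fc_out` tensor merely stands in for the output of the fully-connected layer):
```
# shape walkthrough for the reshaping in the hints above (dummy sizes)
import torch

batch_size, seq_length, hidden_dim, output_size = 10, 5, 256, 1000
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)  # nn.LSTM(batch_first=True) output

flat = lstm_output.contiguous().view(-1, hidden_dim)           # (batch*seq, hidden) fed to the fc layer
fc_out = torch.randn(flat.size(0), output_size)                # stand-in for self.fc(flat)

out = fc_out.view(batch_size, -1, output_size)                 # (batch, seq, vocab)
out = out[:, -1]                                               # keep only the last time step's scores
print(out.shape)                                               # torch.Size([10, 1000])
```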
###Code
import torch.nn as nn
class RNN(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        # fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
# # Creating new variables for the hidden state, otherwise
# # we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
# print(h[0].data)
# get predicted outputs
output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
print(len(vocab_to_int))
###Output
21388
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.940719823598862
Epoch: 1/10 Loss: 4.498707127332687
Epoch: 1/10 Loss: 4.355592090010643
Epoch: 2/10 Loss: 4.11676691709503
Epoch: 2/10 Loss: 3.943764536857605
Epoch: 2/10 Loss: 3.898621161222458
Epoch: 3/10 Loss: 3.8112251646917144
Epoch: 3/10 Loss: 3.7209423907995225
Epoch: 3/10 Loss: 3.694894492983818
Epoch: 4/10 Loss: 3.656496029899448
Epoch: 4/10 Loss: 3.5812808928489686
Epoch: 4/10 Loss: 3.5513569011688233
Epoch: 5/10 Loss: 3.5290392236407553
Epoch: 5/10 Loss: 3.471140544652939
Epoch: 5/10 Loss: 3.4477819299697874
Epoch: 6/10 Loss: 3.436254263325843
Epoch: 6/10 Loss: 3.3901546934843063
Epoch: 6/10 Loss: 3.368346190929413
Epoch: 7/10 Loss: 3.3658659773052864
Epoch: 7/10 Loss: 3.3298056559562683
Epoch: 7/10 Loss: 3.304908910870552
Epoch: 8/10 Loss: 3.309646376140034
Epoch: 8/10 Loss: 3.279695846915245
Epoch: 8/10 Loss: 3.2542127801179888
Epoch: 9/10 Loss: 3.2628642196121884
Epoch: 9/10 Loss: 3.2398793606758116
Epoch: 9/10 Loss: 3.210788159966469
Epoch: 10/10 Loss: 3.2232657392230637
Epoch: 10/10 Loss: 3.2002066918611525
Epoch: 10/10 Loss: 3.1699355088472365
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Going over the course material regarding embedding, I noticed that typical embedding dimensions are around 200 - 300 in size. Upon reading from different sources: https://arxiv.org/pdf/1707.06799.pdf https://github.com/wojzaremba/lstm/blob/76870253cfca069477f06b7056af87f98490b6eb/main.lua#L44 https://machinelearningmastery.com/tune-lstm-hyperparameters-keras-time-series-forecasting/ as well as going over the course examples (Skip-gram Word2Vec, Simple RNN, Sentiment Analysis with an RNN) and older courses' intuition, I chose the parameters. I tried: sequence_length = 10, batch_size = 64, learning_rate = 0.01, embedding_dim = 200, hidden_dim = 200, n_layers = 2. Started with loss 9.25 and after 4 epochs the loss was still around 9.26. sequence_length = 10, batch_size = 64, learning_rate = 0.003, embedding_dim = 300, hidden_dim = 250, n_layers = 2 Started with Loss: 9.202159190654754 and at epoch 4 it was Loss: 9.206429640371343 sequence_length = 20, batch_size = 20, learning_rate = 0.3, embedding_dim = 300, hidden_dim = 250, n_layers = 2 Started with Loss: 9.70091618013382, and at epoch 4 it was still around 9.6 sequence_length = 20, batch_size = 124, learning_rate = 1, embedding_dim = 200, hidden_dim = 200, n_layers = 2 Started with Epoch: 1/10 Loss: 9.50547212076187. At this point I realized I had some bugs in my code related to zero_grad, an extra dropout layer and a sigmoid layer. Fixed the issues and retried: sequence_length = 10, batch_size = 128, learning_rate = 0.001, embedding_dim = 200, hidden_dim = 250, n_layers = 2 Started with: Training for 10 epoch(s)... Epoch: 1/10 Loss: 4.944083527803421 ... Epoch: 4/10 Loss: 3.5780555000305174 ... Epoch: 7/10 Loss: 3.3266124720573425 ... sequence_length = 10, batch_size = 124, learning_rate = 0.1, embedding_dim = 200, hidden_dim = 200, n_layers = 2 Started with Training for 10 epoch(s)... Epoch: 1/10 Loss: 5.481069218158722 Epoch: 2/10 Loss: 5.025624033570289 Epoch: 3/10 Loss: 4.981013494968415. I stopped here because, even though the loss was decreasing, it seemed to converge much more slowly than the previous experiment with a lower learning rate and a slightly bigger hidden_dim. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_2.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        if(train_on_gpu):
            current_seq = current_seq.cpu() # move to cpu
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
    dict_token = {'.': '||period||',
                  ',': '||comma||',
                  '"': '||quotation_mark||',
                  ';': '||semicolon||',
                  '!': '||exclamation_mark||',
                  '?': '||question_mark||',
                  '(': '||left_parentheses||',
                  ')': '||right_parentheses||',
                  '-': '||dash||',
                  '\n': '||return||'}
return dict_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
for i in range(0, len(words) - sequence_length):
features.append(words[i:i+sequence_length])
targets.append(words[i+sequence_length])
features = np.array(features)
targets = np.array(targets)
data = TensorDataset(torch.from_numpy(features),
torch.from_numpy(targets))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# test dataloader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[12, 13, 14, 15, 16],
[ 6, 7, 8, 9, 10],
[21, 22, 23, 24, 25],
[39, 40, 41, 42, 43],
[24, 25, 26, 27, 28],
[36, 37, 38, 39, 40],
[29, 30, 31, 32, 33],
[18, 19, 20, 21, 22],
[30, 31, 32, 33, 34],
[ 8, 9, 10, 11, 12]])
torch.Size([10])
tensor([17, 11, 26, 44, 29, 41, 34, 23, 35, 13])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param n_layers: The number of LSTM/GRU layers
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=dropout, batch_first=True)
# fully-connected output layer and log-softmax over the vocabulary
self.fc = nn.Linear(self.hidden_dim, self.output_size)
self.log = nn.LogSoftmax(dim=1)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer (no separate dropout layer is applied here;
# dropout is already used between the stacked LSTM layers via the LSTM's dropout argument)
out = self.fc(lstm_out)
# log softmax function
out = self.log(out)
# reshape to be batch_size first
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return the last batch of word scores (log-softmax output) and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
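# Hedged sketch (an added illustration, not required by the test below): a quick
# shape check on dummy data with a tiny model, assuming the RNN class defined in
# this cell and the train_on_gpu flag set earlier.
_toy_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
_toy_input = torch.zeros(4, 6).long()   # (batch_size=4, sequence_length=6) of word ids
if train_on_gpu:
    _toy_rnn.cuda()
    _toy_input = _toy_input.cuda()
_toy_hidden = _toy_rnn.init_hidden(4)
_toy_out, _toy_hidden = _toy_rnn(_toy_input, _toy_hidden)
assert _toy_out.shape == (4, 20)           # (batch_size, vocab_size) word scores
assert _toy_hidden[0].shape == (2, 4, 16)  # (n_layers, batch_size, hidden_dim)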
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
clip=5 # gradient clipping
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
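# Hedged sketch (an added illustration, not part of the provided tests): one
# training step on toy tensors, assuming the RNN class and train_on_gpu flag
# defined earlier; it should return a float loss and an updated hidden state.
_toy_rnn2 = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
if train_on_gpu:
    _toy_rnn2.cuda()
_toy_optimizer = torch.optim.Adam(_toy_rnn2.parameters(), lr=0.001)
_toy_criterion = nn.CrossEntropyLoss()
_toy_hidden2 = _toy_rnn2.init_hidden(4)
_toy_inp = torch.randint(0, 20, (4, 6)).long()
_toy_target = torch.randint(0, 20, (4,)).long()
_toy_loss, _toy_hidden2 = forward_back_prop(_toy_rnn2, _toy_optimizer, _toy_criterion,
                                            _toy_inp, _toy_target, _toy_hidden2)
assert isinstance(_toy_loss, float)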
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 20
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
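# Hedged sanity check (an addition, not part of the assignment): with these
# settings there should be far more full batches per epoch than
# show_every_n_batches, so the training loop actually prints progress.
assert len(int_text) // batch_size > show_every_n_batches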
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.736658761024475
Epoch: 1/10 Loss: 5.125327094554901
Epoch: 1/10 Loss: 4.991543843746185
Epoch: 1/10 Loss: 4.907192386627197
Epoch: 1/10 Loss: 4.817068107604981
Epoch: 1/10 Loss: 4.705086775302887
Epoch: 1/10 Loss: 4.692974278450012
Epoch: 1/10 Loss: 4.611390564441681
Epoch: 1/10 Loss: 4.589373518943787
Epoch: 1/10 Loss: 4.53254309463501
Epoch: 1/10 Loss: 4.532573010444641
Epoch: 1/10 Loss: 4.475981976509094
Epoch: 1/10 Loss: 4.458737111091613
Epoch: 1/10 Loss: 4.410513641834259
Epoch: 1/10 Loss: 4.462261398315429
Epoch: 1/10 Loss: 4.384904719829559
Epoch: 1/10 Loss: 4.412528932571411
Epoch: 1/10 Loss: 4.341090875625611
Epoch: 1/10 Loss: 4.365277145385742
Epoch: 1/10 Loss: 4.313001236438751
Epoch: 1/10 Loss: 4.348100092411041
Epoch: 1/10 Loss: 4.274254346847534
Epoch: 1/10 Loss: 4.309982522010803
Epoch: 1/10 Loss: 4.28107989025116
Epoch: 1/10 Loss: 4.252639470100402
Epoch: 1/10 Loss: 4.2668794503211975
Epoch: 1/10 Loss: 4.295347595214844
Epoch: 2/10 Loss: 4.178901625644412
Epoch: 2/10 Loss: 4.093178703784942
Epoch: 2/10 Loss: 4.061955538272858
Epoch: 2/10 Loss: 4.044850610256195
Epoch: 2/10 Loss: 4.07085853767395
Epoch: 2/10 Loss: 4.084657386779785
Epoch: 2/10 Loss: 4.02017689704895
Epoch: 2/10 Loss: 4.092840795516968
Epoch: 2/10 Loss: 4.055262157440185
Epoch: 2/10 Loss: 4.075079744338989
Epoch: 2/10 Loss: 4.043724298477173
Epoch: 2/10 Loss: 4.027819560527801
Epoch: 2/10 Loss: 4.025751362800598
Epoch: 2/10 Loss: 4.05658544254303
Epoch: 2/10 Loss: 4.0108096189498905
Epoch: 2/10 Loss: 4.0870779485702515
Epoch: 2/10 Loss: 4.060791593551635
Epoch: 2/10 Loss: 4.072946607112884
Epoch: 2/10 Loss: 4.034135373115539
Epoch: 2/10 Loss: 4.06848963546753
Epoch: 2/10 Loss: 4.0566342115402225
Epoch: 2/10 Loss: 4.021756707191467
Epoch: 2/10 Loss: 4.03763682603836
Epoch: 2/10 Loss: 4.055242288589477
Epoch: 2/10 Loss: 4.046143504142761
Epoch: 2/10 Loss: 3.9976012234687803
Epoch: 2/10 Loss: 4.015362548828125
Epoch: 3/10 Loss: 3.9316128298116566
Epoch: 3/10 Loss: 3.841157184123993
Epoch: 3/10 Loss: 3.823253267288208
Epoch: 3/10 Loss: 3.8141412796974183
Epoch: 3/10 Loss: 3.842183629989624
Epoch: 3/10 Loss: 3.827045913696289
Epoch: 3/10 Loss: 3.81816468000412
Epoch: 3/10 Loss: 3.871703085899353
Epoch: 3/10 Loss: 3.8313865823745727
Epoch: 3/10 Loss: 3.824892319202423
Epoch: 3/10 Loss: 3.835803173542023
Epoch: 3/10 Loss: 3.8363525438308717
Epoch: 3/10 Loss: 3.865915764808655
Epoch: 3/10 Loss: 3.8483505210876463
Epoch: 3/10 Loss: 3.873537076473236
Epoch: 3/10 Loss: 3.884829065799713
Epoch: 3/10 Loss: 3.880577545642853
Epoch: 3/10 Loss: 3.8969109983444215
Epoch: 3/10 Loss: 3.852950508594513
Epoch: 3/10 Loss: 3.8667793421745302
Epoch: 3/10 Loss: 3.88125945186615
Epoch: 3/10 Loss: 3.878620346069336
Epoch: 3/10 Loss: 3.8892639956474304
Epoch: 3/10 Loss: 3.8855131974220276
Epoch: 3/10 Loss: 3.9142270221710205
Epoch: 3/10 Loss: 3.8740358600616456
Epoch: 3/10 Loss: 3.8966314964294435
Epoch: 4/10 Loss: 3.764946259883519
Epoch: 4/10 Loss: 3.621879717826843
Epoch: 4/10 Loss: 3.6472070951461792
Epoch: 4/10 Loss: 3.644763184547424
Epoch: 4/10 Loss: 3.6753933601379396
Epoch: 4/10 Loss: 3.68244918012619
Epoch: 4/10 Loss: 3.664783829689026
Epoch: 4/10 Loss: 3.7078113021850587
Epoch: 4/10 Loss: 3.720352824211121
Epoch: 4/10 Loss: 3.6864221034049987
Epoch: 4/10 Loss: 3.698038475036621
Epoch: 4/10 Loss: 3.7350022139549255
Epoch: 4/10 Loss: 3.7252344789505005
Epoch: 4/10 Loss: 3.7419666609764097
Epoch: 4/10 Loss: 3.7479772901535036
Epoch: 4/10 Loss: 3.75286576461792
Epoch: 4/10 Loss: 3.743564266204834
Epoch: 4/10 Loss: 3.720660849571228
Epoch: 4/10 Loss: 3.773791268825531
Epoch: 4/10 Loss: 3.742883951187134
Epoch: 4/10 Loss: 3.771033992290497
Epoch: 4/10 Loss: 3.769046573162079
Epoch: 4/10 Loss: 3.768140766620636
Epoch: 4/10 Loss: 3.7594512996673584
Epoch: 4/10 Loss: 3.8039037947654726
Epoch: 4/10 Loss: 3.772921691894531
Epoch: 4/10 Loss: 3.8233641571998596
Epoch: 5/10 Loss: 3.6463201233881097
Epoch: 5/10 Loss: 3.539736171245575
Epoch: 5/10 Loss: 3.538268340587616
Epoch: 5/10 Loss: 3.559518147468567
Epoch: 5/10 Loss: 3.543226071357727
Epoch: 5/10 Loss: 3.5728639197349548
Epoch: 5/10 Loss: 3.5837946224212645
Epoch: 5/10 Loss: 3.6096061220169067
Epoch: 5/10 Loss: 3.5774189486503603
Epoch: 5/10 Loss: 3.5845006217956543
Epoch: 5/10 Loss: 3.588516098022461
Epoch: 5/10 Loss: 3.6028920378684997
Epoch: 5/10 Loss: 3.6090316624641416
Epoch: 5/10 Loss: 3.6190820560455323
Epoch: 5/10 Loss: 3.6119179525375364
Epoch: 5/10 Loss: 3.6212544388771057
Epoch: 5/10 Loss: 3.6378393816947936
Epoch: 5/10 Loss: 3.655143165588379
Epoch: 5/10 Loss: 3.5992249417304993
Epoch: 5/10 Loss: 3.6523594312667846
Epoch: 5/10 Loss: 3.651148400783539
Epoch: 5/10 Loss: 3.6515752282142637
Epoch: 5/10 Loss: 3.6820572514534
Epoch: 5/10 Loss: 3.660417200565338
Epoch: 5/10 Loss: 3.6775714192390443
Epoch: 5/10 Loss: 3.6552570204734804
Epoch: 5/10 Loss: 3.690795940876007
Epoch: 6/10 Loss: 3.4234047689437865
Epoch: 6/10 Loss: 3.4138656578063964
Epoch: 6/10 Loss: 3.4833018317222595
Epoch: 6/10 Loss: 3.463183061122894
Epoch: 6/10 Loss: 3.4603938026428223
Epoch: 6/10 Loss: 3.474855549812317
Epoch: 6/10 Loss: 3.4998416175842286
Epoch: 6/10 Loss: 3.4772879576683042
Epoch: 6/10 Loss: 3.492059448719025
Epoch: 6/10 Loss: 3.524359532356262
Epoch: 6/10 Loss: 3.523784945964813
Epoch: 6/10 Loss: 3.5232100682258607
Epoch: 6/10 Loss: 3.520945044517517
Epoch: 6/10 Loss: 3.533334993362427
Epoch: 6/10 Loss: 3.5381511759757998
Epoch: 6/10 Loss: 3.5819815135002138
Epoch: 6/10 Loss: 3.550620337963104
Epoch: 6/10 Loss: 3.540581825733185
Epoch: 6/10 Loss: 3.5664678201675417
Epoch: 6/10 Loss: 3.5843753423690794
Epoch: 6/10 Loss: 3.5934629945755003
Epoch: 6/10 Loss: 3.567530584812164
Epoch: 6/10 Loss: 3.597407069683075
Epoch: 6/10 Loss: 3.596396448135376
Epoch: 6/10 Loss: 3.574269030570984
Epoch: 7/10 Loss: 3.452623408950302
Epoch: 7/10 Loss: 3.351450464248657
Epoch: 7/10 Loss: 3.365470413684845
Epoch: 7/10 Loss: 3.3561359429359436
Epoch: 7/10 Loss: 3.3450084023475646
Epoch: 7/10 Loss: 3.3796080207824706
Epoch: 7/10 Loss: 3.3846295371055604
Epoch: 7/10 Loss: 3.4034991483688355
Epoch: 7/10 Loss: 3.367743728160858
Epoch: 7/10 Loss: 3.433341740608215
Epoch: 7/10 Loss: 3.4135329275131228
Epoch: 7/10 Loss: 3.42270102930069
Epoch: 7/10 Loss: 3.4442732038497925
Epoch: 7/10 Loss: 3.4657301139831542
Epoch: 7/10 Loss: 3.4642896213531493
Epoch: 7/10 Loss: 3.4376904215812685
Epoch: 7/10 Loss: 3.4494236745834352
Epoch: 7/10 Loss: 3.459057442188263
Epoch: 7/10 Loss: 3.457058773994446
Epoch: 7/10 Loss: 3.4917862105369566
Epoch: 7/10 Loss: 3.463878562450409
Epoch: 7/10 Loss: 3.481965585231781
Epoch: 7/10 Loss: 3.4929272351264955
Epoch: 7/10 Loss: 3.5187024540901186
Epoch: 7/10 Loss: 3.510863247871399
Epoch: 7/10 Loss: 3.4997693314552305
Epoch: 7/10 Loss: 3.5230594906806947
Epoch: 8/10 Loss: 3.3617347051033453
Epoch: 8/10 Loss: 3.2716021885871887
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** First, I set hidden_dim = 128, but the loss stopped decreasing around 3.8. Next I tried hidden_dim = 256, which still did not work. A learning rate of 0.01 performed badly, and with embedding_dim = 50 the loss plateaued around 3.5 after 10 epochs. Finally, I set hidden_dim = 512, which worked. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
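###Code
# Hedged sketch (an added, standalone illustration of the top-k sampling used in
# the generate function above): only the k highest word scores can be drawn, with
# probabilities renormalised over those k values.
_toy_scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.7]])  # fake scores for a 5-word vocab
_toy_p, _toy_idx = F.softmax(_toy_scores, dim=1).data.topk(3)
_toy_p = _toy_p.numpy().squeeze()
_toy_idx = _toy_idx.numpy().squeeze()
_toy_next_id = np.random.choice(_toy_idx, p=_toy_p / _toy_p.sum())
assert int(_toy_next_id) in (1, 3, 4)  # only the three highest-scoring ids are possible
###Output
_____no_output_____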
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:.
jerry: i mean, if they had a little slow, you were gonna be here in a while.
elaine: yeah, yeah, yeah. yeah....
george: i think you should go.
jerry: you know i just remembered, i would never be with a...
george: i mean, i just wanted to know what it was. i mean i was just a lesbian actor and it was a good idea. i mean, i was just wondering, you don't want to go.
newman: well, i didn't get any plantains from this.
jerry: well, i was just curious.
george: well, i don't know. i was a hipster dufus. i mean, what did he do?
george: you don't understand?
kramer: yeah.
jerry: yeah, i got a message to get you a brand meal.
jerry: what?
george: i don't think i want.
george: oh, no no. i was thinking.
jerry: no.
kramer: well, i was just trying to get a new new friends.
george: i can't believe it!
kramer:(laughing) hey, hey, i gotta get this.
kramer: well, you know, i don't know, you know, the phillips millers, gritty, the horror.
jerry:(sarcastically) oh, my god!
elaine: i mean, i think i can do it!
elaine: what do you want?
george: i don't know.
jerry: you mean you want to be a communist?
kramer: oh no no, i was just trying to get a little flash.
kramer: oh, you got a problem with a girl in your farina.
jerry: what is this?
george: well, i don't know, i can't... i mean...
jerry: i don't know what to do. i'm gonna go to
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
from itertools import islice
def create_lookup_tables(words):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words_counter = Counter(words)
vocab_to_int = {pair[0]: idx for idx, pair in enumerate(words_counter.most_common())}
int_to_vocab = {val: key for key, val in vocab_to_int.items()}
return (vocab_to_int, int_to_vocab)
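# Hedged sketch (an added illustration, not part of the tests below): on a toy
# corpus the most frequent word should get the lowest id with this
# frequency-ordered mapping.
_toy_v2i, _toy_i2v = create_lookup_tables(['the', 'cat', 'the', 'dog', 'the'])
assert _toy_v2i['the'] == 0 and _toy_i2v[0] == 'the'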
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'}
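# Hedged sketch (an added illustration; it assumes the helper pads each token with
# spaces during preprocessing): after tokenization, "bye!" and "bye" map to the
# same word.
_toy_line = 'bye! bye'
for _symbol, _token in token_lookup().items():
    _toy_line = _toy_line.replace(_symbol, ' {} '.format(_token))
assert _toy_line.split()[0] == 'bye' and _toy_line.split()[2] == 'bye'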
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
import numpy as np
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
features = [list(words[i:i+sequence_length]) for i in range(len(words) - sequence_length + 1)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)] + [words[0]]
'''
# Below are some prints I found useful when working this
print('total word count: {}'.format(len(words)))
print('total feature count: {}'.format(len(features)))
print('total target count: {}'.format(len(targets)))
print('first feature: {}'.format(features[:1]))
print('first target: {}'.format(targets[:1]))
print('last 2 features: {}'.format(features[-2:]))
print('last 2 targets: {}'.format(targets[-2:]))
print(features[0][0])
print(targets[0])
'''
batch_length = len(features)//batch_size
dataset = TensorDataset(torch.LongTensor(features[:batch_length * batch_size]), torch.LongTensor(targets[:batch_length * batch_size]))
loader = DataLoader(dataset, batch_size=batch_size, shuffle = True)
return loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# batch_data(range(50), 5, 10)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
# Checking multiple ranges so that we check when the targets wrap and when they do not and then more just 'cause
for range_length in range(50, 60):
sequence_length = 5
batch_size = 10
test_text = range(range_length)
t_loader = batch_data(test_text, sequence_length=sequence_length, batch_size=batch_size)
data_iter = iter(t_loader)
# Shuffle will let us see the entire corpus of features before shuffling again
# Given that the first sequence starts at idx 0 and ends at idx sequence_length - 1 inclusive
# and that the last sequence will end at idx len(test_text) - 1 inclusive [49]
# we will have len(test_text) - (sequence_length - 1) items [46]
# An intuitive way of saying this is to see that the sequences should start with numbers 0,1,2,...,43,44,45
# So we should need to iterate the number of sequences divided by the batch size to find all the items
expected_length = (range_length - (sequence_length - 1))//batch_size
assert len(t_loader) == expected_length
for i in range(expected_length):
x, y = data_iter.next()
assert x.shape[0] == batch_size
assert x.shape[1] == sequence_length
for i in range(sequence_length - 1):
# x should be sequential
assert torch.eq(x[:,i] + 1, x[:,i+1]).all()
# If we reshape x to look at just the last numbers in the sequence and add one then that should be y
expected_y = x[:,-1] + 1
expected_y[expected_y==range_length] = 0
assert torch.eq(expected_y, y).all()
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param n_layers: The number of LSTM/GRU layers
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = x.size(0)
embeds = self.embed(x)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
out = out.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return out[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
if (train_on_gpu):
hidden = (hidden[0].cuda(), hidden[1].cuda())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
clip = 5
def forward_back_prop(rnn, optimizer, criterion, inputs, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inputs: A batch of input to the neural network
:param target: The target output for the batch of input
:param hidden: The last hidden state Tensor
:return: The loss and the latest hidden state Tensor
"""
# move data to GPU, if available
rnn.train()
if(train_on_gpu):
inputs, target = inputs.cuda(), target.cuda()
# limiting the depth of our backprop
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inputs, hidden)
# perform backpropagation and optimization
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, min_loss=np.inf):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
loss = np.average(batch_losses)
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, loss))
batch_losses = []
if loss < min_loss:
helper.save_model('./save/trained_rnn', rnn)
print('Model Trained and Saved')
min_loss = loss
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 12 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 4
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
# create model and move to gpu if available
#rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
rnn = helper.load_model('./save/trained_rnn')
min_loss = 3.497277446269989
learning_rate = 0.0001
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, min_loss)
# saving the trained model
#helper.save_model('./save/trained_rnn', trained_rnn)
#print('Model Trained and Saved')
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I started with hyperparameters from our character rnn model. I then chose an embedding_dim from the sentiment rnn. This was taking a very long time to converge, so I hit the slack channel, took a look at what others were doing, and made some adjustments. I dropped the sequence length, which makes sense since a sequence of 100 characters contains far fewer words. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 1024 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:37: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# DONE: Implement Function
words_counter = Counter(text)
sorted_words = sorted(words_counter, key=words_counter.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_words)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# DONE: Implement Function
dict_punct = {'.': '<<Period>>',
',': '<<Comma>>',
'"': '<<Quotation_Mark>>',
';': '<<Semicolon>>',
'!': '<<Exclamation_Mark>>',
'?': '<<Question_Mark>>',
'(': '<<Left_Parentheses>>',
')': '<<Right_Parentheses>>',
'-': '<<Dash>>',
'\n': '<<Return>>'}
return dict_punct
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# DONE: Implement function
# get maximum number of FULL batches
num_batches = len(words)//batch_size
# only full batches - cut end of the words list which will not create full batch
words = words[:num_batches*batch_size]
features = []
targets = []
last_batch_start_idx = len(words)-sequence_length
# iterate through words
for idx in range(0, last_batch_start_idx):
# extract features
features.append(words[idx:idx+sequence_length])
# extract target
try:
targets.append(words[idx+sequence_length])
except IndexError:
# if there are not enough words in the list for the last batch, add 0 as the target
targets.append(0)
train_data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(targets)))
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
# return a dataloader
return train_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(111)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 94, 95, 96, 97, 98],
[ 58, 59, 60, 61, 62],
[ 85, 86, 87, 88, 89],
[ 20, 21, 22, 23, 24],
[103, 104, 105, 106, 107],
[ 98, 99, 100, 101, 102],
[ 36, 37, 38, 39, 40],
[ 37, 38, 39, 40, 41],
[ 32, 33, 34, 35, 36],
[ 4, 5, 6, 7, 8]])
torch.Size([10])
tensor([ 99, 63, 90, 25, 108, 103, 41, 42, 37, 9])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# DONE: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embed_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout_p = dropout
# define model layers
self.embed = nn.Embedding(self.vocab_size,
self.embed_dim)
self.lstm = nn.LSTM(input_size =self.embed_dim,
hidden_size =self.hidden_dim,
num_layers =self.n_layers,
dropout =self.dropout_p,
batch_first =True)
self.drop = nn.Dropout(0.25)
self.fc = nn.Linear(self.hidden_dim,
self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# DONE: Implement function
batch_size = nn_input.size(0)
# embeddings layer
encoded = self.embed(nn_input)
# lstm layer
lstm_out, hidden = self.lstm(encoded, hidden)
# stacking outputs of the lstm
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully connected layer
output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# DONE: Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# DONE: Implement Function
# move model to GPU, if available
if train_on_gpu:
rnn.cuda()
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden_t = tuple([each.data for each in hidden])
# clear gradients
rnn.zero_grad()
# feed-forward pass
rnn_out, rnn_hid = rnn(inp, hidden_t)
# perform backpropagation and optimization
loss = criterion(rnn_out, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
# update weights
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), rnn_hid
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.9627698495388035
Epoch: 1/10 Loss: 4.386822091341019
Epoch: 1/10 Loss: 4.214413766145706
Epoch: 2/10 Loss: 4.038146335533641
Epoch: 2/10 Loss: 3.9504165222644807
Epoch: 2/10 Loss: 3.9148817129135134
Epoch: 3/10 Loss: 3.798500166338035
Epoch: 3/10 Loss: 3.7591107790470124
Epoch: 3/10 Loss: 3.74398533987999
Epoch: 4/10 Loss: 3.6425970394870193
Epoch: 4/10 Loss: 3.628464448213577
Epoch: 4/10 Loss: 3.6338286848068235
Epoch: 5/10 Loss: 3.5464783320938182
Epoch: 5/10 Loss: 3.5270272369384768
Epoch: 5/10 Loss: 3.540618770360947
Epoch: 6/10 Loss: 3.461918436669049
Epoch: 6/10 Loss: 3.4494025118350984
Epoch: 6/10 Loss: 3.468783838510513
Epoch: 7/10 Loss: 3.38561912010589
Epoch: 7/10 Loss: 3.383038095712662
Epoch: 7/10 Loss: 3.4104531280994417
Epoch: 8/10 Loss: 3.3363313533451135
Epoch: 8/10 Loss: 3.3289666635990143
Epoch: 8/10 Loss: 3.345423321723938
Epoch: 9/10 Loss: 3.284445214094701
Epoch: 9/10 Loss: 3.2772184579372405
Epoch: 9/10 Loss: 3.3074152629375457
Epoch: 10/10 Loss: 3.239664696071571
Epoch: 10/10 Loss: 3.2348797211647033
Epoch: 10/10 Loss: 3.2674734783172608
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** As a starting point, I tried a combination of params from previous notebooks plus a few ideas of my own (sequence_length=16, batch_size=64, lr=0.01, embed_dim=400, hidden_dim=256, layers=3). Unsuccessful: the loss oscillated between 6.0 and 6.2 over the first 2 epochs, so I stopped training. In the 1st modification I changed sequence_length to 10. The loss was able to go lower (oscillating around 5.9 over the first 2 epochs), but still did not converge. In the 2nd modification I changed the learning_rate to 0.005 and then to 0.001, which finally helped with convergence. I also boosted batch_size to 256, which fit into GPU memory without any issues and massively sped up training. These changes finally led to a loss < 3.5 and a funky generated script :) The 3rd modification was n_layers, which I changed to 2. That helped push the loss a little lower still, and produced more reasonable generated scripts! --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 500 # modify the length to your preference
prime_word = 'george' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
george: i have to be a little more flexible on the table, but i don't know. i just wanted to be in my apartment for a while, you know, the only thing is, i would have been a little more flexible. i mean, you know, i think they would be happy to hear you.
kramer:(to jerry) you know, i think i could go in there.
elaine: i thought you could have a lot of time with this guy, i was just curious.
george:(to george) you know i think i was a little bit about the whole thing.
elaine: oh yeah, yeah.
jerry: so you have a problem?
george: yeah.
elaine: i know what the problem is.
elaine: i don't know.
jerry: i don't know.
elaine: i don't know.
elaine: well i don't know.
elaine: what?
george:(on the phone) oh. i know.
jerry: i think i should.
elaine:(to elaine) you know, the only thing i have, and the punches creek and i don't get it, i have to go to the airport.
newman: what?
jerry: what do you need?
kramer: i don't know.
jerry: i thought you could have said something?
elaine: oh no no...
kramer: oh, i know!
kramer: well, i just don't know how to get it.
jerry: oh my god!
jerry: what is this?
jerry: i don't know.
jerry:(to jerry) hey.
jerry: hey look, i don't have a square. you know, i don't know what the hell i said, i know you would be a little bit about it.
george: i can't get a little more stable.
jerry: i can't.
jerry:(to himself) hey, what is that?!
kramer:(smiling) oh, yeah.
kramer: oh, yeah, right. yeah.(they both look at the table)
[setting: the costanza's house]
kramer: oh, yeah.(they both shake up to the table)
kramer: yeah, well...
jerry: i think you were going to get married soon.
george: i don't think so.
george: oh, no, no. i don't know how you feel.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("data/generated/generated_script_3.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
from collections import Counter
words = text.split(' ')
print(words[:5])
print(len(words))
# remove 'empty' words
words = [word for word in words if word != '']
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
## Build a dictionary that maps words to integers
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = dict(enumerate(vocab,1))
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
ptoken ={'.':'||period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Parentheses||',
')':'||Right_Parentheses||',
'-':'||Dash||',
'\n':'||Return||'
}
return ptoken
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# get total number of items:
batch_size_total = batch_size * sequence_length
# total number of batches we can make
n_batches = len(words)//batch_size_total
# Keep only enough characters to make full batches
words = words[:n_batches * batch_size_total]
# check type on words...
if(type(words)==list):
words = np.array(words)
elif(type(words)==np.ndarray):
pass
else:
raise ValueError('input data is neither np-array or list, it is {}'.format(type(words)))
# Reshape into batch_size rows
words = words.reshape((-1, sequence_length))
# set the target as the word following each sequence; the last target wraps around to the first word
targets = np.roll(words[:,0],-1)
# return a dataloader
data = TensorDataset(torch.from_numpy(words), torch.from_numpy(targets))
return DataLoader(data, shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
# in python 3 need to make list out of range...
test_text = list(range(50))
t_loader = batch_data(test_text, sequence_length=4, batch_size=2)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([2, 4])
tensor([[ 32, 33, 34, 35],
[ 4, 5, 6, 7]])
torch.Size([2])
tensor([ 36, 8])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(dropout)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
self.lstm.flatten_parameters()
output, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
output = output.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#output = self.dropout(output)
output = self.fc(output)
# sigmoid function
#output = self.sig(output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# print('Memory allocated: {} MB'.format(torch.cuda.memory_allocated() / 1024**2))
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 24 # of words in a sequence
# Batch Size
batch_size = 128
print("Batches per Epoch: {}".format(len(int_text)//(batch_size*sequence_length)))
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 16
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = (len(int_text)//(batch_size*sequence_length))//2
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 16 epoch(s)...
Epoch: 1/16 Loss: 6.2943435833371915
Epoch: 1/16 Loss: 5.434529814226874
Epoch: 2/16 Loss: 4.898563677689125
Epoch: 2/16 Loss: 4.854149961471558
Epoch: 3/16 Loss: 4.632449472361597
Epoch: 3/16 Loss: 4.544691020044787
Epoch: 4/16 Loss: 4.344787165214275
Epoch: 4/16 Loss: 4.3828199287940715
Epoch: 5/16 Loss: 4.162998985422068
Epoch: 5/16 Loss: 4.1920594938870135
Epoch: 6/16 Loss: 3.998603886571424
Epoch: 6/16 Loss: 4.035136530317109
Epoch: 7/16 Loss: 3.848791500617718
Epoch: 7/16 Loss: 3.8965666754492396
Epoch: 8/16 Loss: 3.701201585243488
Epoch: 8/16 Loss: 3.7647711572975946
Epoch: 9/16 Loss: 3.5574668982933306
Epoch: 9/16 Loss: 3.632318449020386
Epoch: 10/16 Loss: 3.424426614827123
Epoch: 10/16 Loss: 3.5017578552509177
Epoch: 11/16 Loss: 3.3006941400725265
Epoch: 11/16 Loss: 3.3676699473940093
Epoch: 12/16 Loss: 3.1694899230167786
Epoch: 12/16 Loss: 3.229997391536318
Epoch: 13/16 Loss: 3.056047296524048
Epoch: 13/16 Loss: 3.094313822121456
Epoch: 14/16 Loss: 2.914596128463745
Epoch: 14/16 Loss: 2.9927004649721343
Epoch: 15/16 Loss: 2.80605091226512
Epoch: 15/16 Loss: 2.8572575881563385
Epoch: 16/16 Loss: 2.684618122824307
Epoch: 16/16 Loss: 2.756876294366245
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Started with a sequence length of 16 (since scripts are generally short passages of dialog), a batch size of 128 due to the 8 GB available on the GPU, and 2 layers. No extra dropout layer, since the RNN is generating data. Initially, 16 epochs with a sequence length of 16 and a learning rate of 0.001 converged below a loss of 3.5 very quickly (in fewer than 16 epochs). Tested a 1-layer LSTM with a sequence length of 16 and a learning rate of 0.01; it also converged quickly (about 5 epochs). Tested a 1-layer LSTM with a sequence length of 8 and a learning rate of 0.01; it converged more slowly than with a sequence length of 16. It seems the shorter sequences don't capture the different characters well. Tried a 1-layer LSTM (to make a lighter model compared to 2 or more layers) with a sequence length of 24 words and a learning rate of 0.001. Tried a 2-layer LSTM with a sequence length of 24 words and a learning rate of 0.001; it was slower to converge than 1 layer due to the increased number of parameters, but the output produced is cleaner than with a 1-layer LSTM. The hidden_dim was kept fixed based on the recommendations in the lectures (between 200-500). --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
EN: NOTE: to correctly load the RNN (due to Pickle issues), you need to run all the RNN definition cells above first.
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# To avoid GPU issues, the whole RNN is moved to the CPU here (EN: I highly recommend this).
# Note: the 'train_on_gpu' variable is used rather inconsistently in this project;
# it would be better as a switchable parameter.
rnn.cpu()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
train_on_gpu = False
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word]-1, gen_length)
print(generated_script)
train_on_gpu = True # this is necessary to avoid issues with cuda assertion errors
###Output
jerry:, we were having to a pair of the 70's.(turns to himself) so i want it to be in the apartment.
jerry: yeah?
kramer: well, i got to be able in days!
george: well, i was gonna be in the street. i got to go.
elaine:(to elaine) well- what are you talking?
george:(smiling) : what?
george:(to elaine) well, i told me, you think i think it might have a woman, i'm just gonna like a other fan and then return a cashier of the middle of *fruit* and leaving and and then the only thing by the end out of forty--- huh!(jerry coyly by a little starts away)
jerry:(trying to the phone) : divorce on.
jerry: well, i was just out of this woman. i was having a bundle for a week for that.(inaudible at george) what's it on the parents like a little deal!
jerry: no, no.
elaine: well, he did it was good.
jerry:(smiling) there's a good match.
jerry: well, you know, you're not even going to do that.
jerry: i know.
kramer: yeah, yeah, yeah.
kramer:(to jerry) oh, i don't know i don't know, i think i don't know how it?
jerry: no, i was just like to my house, i know, i think you were saying!
george: yeah.
jerry:(trying to the podium of jerry's side of the middle) you didn't get the same bull.
george: well, you think i was gonna go to a deal!
jerry: yeah!
jerry: what?
kramer: hey!
george: i didn't know what about that?
george:(to jerry) well, i don't even know.
george:(smiling) well,
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    # count word frequencies and sort the vocabulary from most to least frequent
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    # map each word to an integer id and build the reverse id-to-word mapping
    vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
    int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
    # return tuple
    return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
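As a concrete illustration (a hedged sketch only, not the project's `helper.preprocess_and_save_data` code), a dictionary like this could be applied to raw text by replacing each symbol with its token surrounded by spaces before splitting:
```
# illustrative only: apply a small punctuation-token dictionary to a sample string
sample = "hello! how are you?"
tokens = {"!": "||Exclamationmark||", "?": "||Questionmark||"}  # subset for the example
for symbol, token in tokens.items():
    sample = sample.replace(symbol, " {} ".format(token))
print(sample.split())
# ['hello', '||Exclamationmark||', 'how', 'are', 'you', '||Questionmark||']
```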
###Code
from string import punctuation
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dictionary = {
"." : "||Period||",
"," : "||Comma||",
'"' : "||QuotationMark||",
";" : "||Semicolon||",
"!" : "||Exclamationmark||",
"?" : "||Questionmark||",
"(" : "||LeftParentheses||",
")" : "||RightParentheses||",
"-" : "||Dash||",
"\n": "||Return||"
}
return dictionary
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
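Before looking at the implementation below, here is a minimal sketch (independent of `batch_data`, purely for illustration) that builds the (feature, target) pairs for the toy example above:
```
# illustrative only: sliding-window pairs for the toy example
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]
print(pairs)
# [([1, 2, 3, 4], 5), ([2, 3, 4, 5], 6), ([3, 4, 5, 6], 7)]
```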
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
    # build every sliding-window sequence of length `sequence_length`, paired with the
    # word that immediately follows it
    feature_tensors = np.array([words[i:i + sequence_length] for i in range(len(words) - sequence_length)])
    target_tensors = np.array([words[sequence_length + i] for i in range(len(words) - sequence_length)])
    # wrap the arrays as tensors and hand them to a TensorDataset / DataLoader
    feature_tensors = torch.from_numpy(feature_tensors)
    target_tensors = torch.from_numpy(target_tensors)
    data = TensorDataset(feature_tensors, target_tensors)
    data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embed_out = self.embed(nn_input)
lstm_out, hidden = self.lstm(embed_out, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if(train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inp,target = inp.cuda(),target.cuda()
    # detach the hidden state from its history so backpropagation stays within the current batch
    hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output.squeeze(), target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.903444903254509
Epoch: 1/10 Loss: 4.502190977096558
Epoch: 1/10 Loss: 4.364594425082207
Epoch: 2/10 Loss: 4.12398235161082
Epoch: 2/10 Loss: 3.946532906651497
Epoch: 2/10 Loss: 3.9078828673362733
Epoch: 3/10 Loss: 3.8131973517788262
Epoch: 3/10 Loss: 3.7099602723121645
Epoch: 3/10 Loss: 3.7068991522789
Epoch: 4/10 Loss: 3.647635696551823
Epoch: 4/10 Loss: 3.562523124575615
Epoch: 4/10 Loss: 3.566692313551903
Epoch: 5/10 Loss: 3.529239155781233
Epoch: 5/10 Loss: 3.462532285451889
Epoch: 5/10 Loss: 3.462529283285141
Epoch: 6/10 Loss: 3.4430943972014867
Epoch: 6/10 Loss: 3.382022962450981
Epoch: 6/10 Loss: 3.3935736951828
Epoch: 7/10 Loss: 3.376769053490325
Epoch: 7/10 Loss: 3.3211654245853426
Epoch: 7/10 Loss: 3.324988476872444
Epoch: 8/10 Loss: 3.3234892871974897
Epoch: 8/10 Loss: 3.2673957041502
Epoch: 8/10 Loss: 3.2787898918390272
Epoch: 9/10 Loss: 3.277423346311647
Epoch: 9/10 Loss: 3.220760303378105
Epoch: 9/10 Loss: 3.2305056772232055
Epoch: 10/10 Loss: 3.248893229316565
Epoch: 10/10 Loss: 3.183899417877197
Epoch: 10/10 Loss: 3.1929726628065107
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried different sequence_lengths such as 200, 100, 10 and 5, and noticed that the model trained best when sequence_length was 10. For hidden_dim and n_layers I also experimented with various values by trial and error and found that the model works nicely with 256 and 2, so I chose those values. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
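The top-k sampling step mentioned above can be illustrated on a dummy score vector; this is only a sketch of the idea, separate from the `generate` function below:
```
# illustrative only: sample the next word id from dummy word scores with top-k sampling
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2]])  # pretend scores for a vocabulary of 5 words
p = F.softmax(scores, dim=1).data
p, top_i = p.topk(3)                                 # keep the 3 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())      # sample one of them, weighted by probability
print(word_i)
```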
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        if(train_on_gpu):
            current_seq = current_seq.cpu() # move to cpu before manipulating it with numpy
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:38: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    # collections.Counter() counts the occurrence of each word in the text and creates a dictionary
    # with the words as keys and their number of occurrences as values
    counts = Counter(text)
    # next, we sort the dictionary based on values, with the most frequent word first
    vocab = sorted(counts, key=counts.get, reverse=True)
    # by iterating over all words in vocab, we create a new dictionary with their index values starting from 1
    vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
# next we need to swap values with keys in this vocab_to_int dictionary
# for each tuple in this dictionary, we take the value of this tuple, make it key.
# and we assign the key as value
int_to_vocab = {value: key for key, value in vocab_to_int.items()}
#finally, we return the tuple of dictionaries
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punct_to_token = {
'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_parantheses||',
')':'||right_parantheses||',
'-':'||dash||',
'\n':'||return||'
}
return punct_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
    # how many (sequence, target) pairs to build; note that only the first
    # len(words)//sequence_length starting positions are used, not every sliding window
    total_sequences = len(words)//sequence_length
input_text = []
target_text = []
# now we will iterate over all sequences and create the corresponding pairs for input and target features (texts)
for i in range(0,total_sequences):
# the end index of the ith sequence
end = i + sequence_length
# now appending ith sequence to the input_text
input_text.append(words[i:end])
# target_text corresponding to the ith sequence is just the next word right after sequence
target_text.append(words[end])
# now we are creating the tensors, which I'll call input_tensors and target_tensors from now on
input_tensors = torch.LongTensor(input_text)
target_tensors = torch.LongTensor(target_text)
print(len(input_tensors), len(target_tensors))
data = TensorDataset(input_tensors, target_tensors)
# shuffling the data set and batching (without shuffling, the training might be biased)
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
loader = batch_data(int_text,4,20)
feature,target = next(iter(loader))
print(feature)
print(target)
###Output
223027 223027
tensor([[ 1, 15, 5, 28],
[ 9374, 2, 37, 622],
[ 1744, 70, 74, 237],
[ 79, 2, 1, 1],
[ 85, 66, 1799, 16],
[ 31, 2, 1, 1],
[ 153, 39, 20, 223],
[ 880, 7653, 51, 105],
[ 1, 1, 17, 19],
[ 179, 189, 16, 2],
[ 1, 1, 8, 38],
[ 1, 816, 19, 59],
[ 6, 196, 36, 42],
[ 9, 52, 16, 114],
[ 51, 59, 16, 2],
[ 44, 13, 38, 7222],
[ 1, 15, 12, 46],
[ 22, 7, 1376, 13],
[ 37, 4262, 2, 1],
[ 5, 112, 71, 6]])
tensor([ 3, 21, 4, 17, 22, 17, 21, 70, 59,
1, 66, 1986, 45, 47, 1, 5, 163, 752,
1, 459])
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(500)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
###Output
100 100
torch.Size([10, 5])
tensor([[ 40, 41, 42, 43, 44],
[ 14, 15, 16, 17, 18],
[ 12, 13, 14, 15, 16],
[ 97, 98, 99, 100, 101],
[ 35, 36, 37, 38, 39],
[ 89, 90, 91, 92, 93],
[ 95, 96, 97, 98, 99],
[ 75, 76, 77, 78, 79],
[ 98, 99, 100, 101, 102],
[ 18, 19, 20, 21, 22]])
torch.Size([10])
tensor([ 45, 19, 17, 102, 40, 94, 100, 80, 103, 23])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
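To make hints 1 and 2 concrete, here is a small shape-only sketch with dummy tensors (the dimensions are placeholders, not the values used in this notebook):
```
# illustrative only: the reshaping described in the hints, traced with dummy dimensions
import torch

batch_size, seq_length, hidden_dim, output_size = 2, 3, 4, 5
lstm_output = torch.zeros(batch_size, seq_length, hidden_dim)
flat = lstm_output.contiguous().view(-1, hidden_dim)   # (batch_size * seq_length, hidden_dim)
fc_out = torch.zeros(flat.size(0), output_size)        # stand-in for the fully-connected output
out = fc_out.view(batch_size, -1, output_size)[:, -1]  # keep only the last word's scores
print(out.shape)  # torch.Size([2, 5])
```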
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.dropout = nn.Dropout(0.25)
self.fc = nn.Linear(hidden_dim, output_size)
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
#initialize the weights, might want to add a function later
# self.init_weights()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embedding = self.embedding(nn_input)
lstm, hidden = self.lstm(embedding,hidden)
lstm_out = lstm.contiguous().view(-1,self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
        # reshape into (batch_size, seq_length, output_size) and keep only the last word's scores
sig_out = out.view(batch_size, -1, self.output_size)
sig_out = sig_out[:, -1]
# return one batch of output word scores and the hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
        weight = next(self.parameters()).data
        if (train_on_gpu):
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
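One detail worth calling out (and that the implementation below handles) is that the hidden state returned by the previous batch is still attached to that batch's computation graph; detaching it before the forward pass keeps backpropagation confined to the current batch. A minimal sketch of the pattern, with placeholder dimensions:
```
# illustrative only: detach an LSTM hidden state before reusing it for the next batch
import torch

h = (torch.zeros(2, 64, 600, requires_grad=True),  # dummy (h, c) pair, dims are placeholders
     torch.zeros(2, 64, 600, requires_grad=True))
h = tuple(each.data for each in h)                  # .data (or .detach()) drops the graph history
print(h[0].requires_grad)                           # False
```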
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
train_on_gpu = torch.cuda.is_available()
if (train_on_gpu):
rnn.cuda()
inp,target = inp.cuda(), target.cuda()
# zero accummulated gradients
rnn.zero_grad()
    # perform backpropagation and optimization
    # Calling rnn(inp, hidden) with the hidden state still attached to the previous batch's
    # graph made loss.backward() fail unless retain_graph=True was set, which in turn caused
    # other errors; detaching the hidden state first avoids both problems.
    hidden = tuple([each.data for each in hidden])
output, hidden = rnn(inp, hidden)
loss = criterion(output, target.long())
loss.backward()
    # 'clip_grad_norm' prevents the exploding gradient problem
nn.utils.clip_grad_norm_(rnn.parameters(), 4)
optimizer.step()
# return the average loss of a batch and the hidden state produced
return loss.item(),hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 5
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int) + 1
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 600
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 5 epoch(s)...
Epoch: 1/5 Loss: 5.331203703403473
Epoch: 1/5 Loss: 4.7722836451530455
Epoch: 2/5 Loss: 4.427083381746811
Epoch: 2/5 Loss: 4.24037056684494
Epoch: 3/5 Loss: 4.093416558401315
Epoch: 3/5 Loss: 3.9671500473022463
Epoch: 4/5 Loss: 3.83874550511829
Epoch: 4/5 Loss: 3.7339776611328124
Epoch: 5/5 Loss: 3.5769182065833456
Epoch: 5/5 Loss: 3.515318180561066
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Based on previous experience with the dog breed model, I wanted to start with the simplest possible hyperparameters for the quickest training, since quick training runs allow trying out various hyperparameter ranges given the time constraint on the project. I started with a short sequence length of 10 for the first training run and will also try 5 and 15; for the model to guess the next word, I think a sequence length of 10 gives a pretty good idea of the other words around it. I kept the batch_size low, starting with 64, to avoid memory errors, and would try 32 and 128 as well. I started with only 5 epochs so training finishes as soon as possible, because sometimes training does not reduce the loss at all and a manual interrupt becomes necessary; once I feel the other parameters are well tuned, I'll train for a higher number of epochs to bring the loss further down. I set the learning rate to 0.001 for the first training attempts. For n_layers I chose to be as simple as possible with 2 layers, based on previous experience with the dog breed classifier; I'll start with the simplest model and then increase complexity as necessary to reduce the loss even further. I chose an embedding_dim of 300 and a hidden_dim of 600; in the lecture videos for generating text from the Anna Karenina novel, we used similar values between 200 and 600. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
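For orientation, here is a minimal sketch of one way the function above could be filled in (an illustration of the stated requirements, not the graded solution); it orders ids by word frequency, although any consistent word-to-id mapping works:
```
from collections import Counter

# Sketch only: build word<->id lookups, most frequent words getting the smallest ids.
def create_lookup_tables_sketch(text):
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    vocab_to_int = {word: idx for idx, word in enumerate(sorted_vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```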
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
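As a hedged sketch of what such a dictionary might contain (the exact token strings are arbitrary assumptions, as long as they cannot be mistaken for real words):
```
# Sketch only: punctuation symbol -> unambiguous token string.
def token_lookup_sketch():
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '-': '||Dash||',
        '\n': '||Return||'
    }
```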
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
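A hedged sketch of one possible `batch_data` implementation matching the description above: slide a window of `sequence_length` word ids for the features and take the very next id as the target (the helper name and details are illustrative assumptions):
```
import torch
from torch.utils.data import TensorDataset, DataLoader

# Sketch only: windowed features with the following word as the target.
def batch_data_sketch(words, sequence_length, batch_size):
    words = list(words)
    features, targets = [], []
    for i in range(len(words) - sequence_length):
        features.append(words[i:i + sequence_length])
        targets.append(words[i + sequence_length])
    data = TensorDataset(torch.tensor(features, dtype=torch.long),
                         torch.tensor(targets, dtype=torch.long))
    return DataLoader(data, shuffle=True, batch_size=batch_size)
```
With `words = range(50)`, `sequence_length=5` and `batch_size=10`, this yields batches shaped `(10, 5)` and `(10,)`, which is what the test cell below expects.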
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
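For orientation only, a compact LSTM-based sketch that follows the hints above (embedding, LSTM, then a fully-connected layer, returning scores for the last time step); it is one workable layout under the stated hints, not the project's reference solution:
```
import torch.nn as nn

class RNNSketch(nn.Module):
    # Sketch only: embedding + LSTM + linear head over the last time step.
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.output_size = output_size
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        embeds = self.embedding(nn_input)                 # (batch, seq, embed)
        lstm_out, hidden = self.lstm(embeds, hidden)      # (batch, seq, hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        output = self.fc(lstm_out)                        # (batch*seq, output)
        output = output.view(batch_size, -1, self.output_size)
        return output[:, -1], hidden                      # scores for the last word only

    def init_hidden(self, batch_size):
        # zero-initialised (h, c), created on the same device as the weights
        weight = next(self.parameters()).data
        h = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
        c = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
        return (h, c)
```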
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
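A hedged sketch of the forward/backward step described above; detaching the hidden state and clipping gradients are common choices here, not requirements spelled out in this notebook:
```
import torch
import torch.nn as nn

# Sketch only: one optimisation step over a single batch.
def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    if torch.cuda.is_available():
        rnn.cuda()
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow across batches
    hidden = tuple(h.data for h in hidden)
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # max norm of 5 is an assumption
    optimizer.step()
    return loss.item(), hidden
```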
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
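Purely as an illustration of the kind of values that fit in the cell above, the assignments below match a completed copy of this notebook further down in this document (which sets `show_every_n_batches` to 1500 instead); treat them as a starting point, not prescribed values:
```
# Illustrative starting values only.
sequence_length = 10              # words per training example
batch_size = 128
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)    # one logit per vocabulary token
output_size = vocab_size
embedding_dim = 200               # much smaller than vocab_size
hidden_dim = 250
n_layers = 2
show_every_n_batches = 500
```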
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move back to cpu before np.roll
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
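To make the top-k step above concrete, a toy illustration of how the retained probabilities are renormalised before sampling (the numbers and ids are made up):
```
import numpy as np

# Toy example of the top-k sampling step: renormalise the kept probabilities and draw one id.
probs = np.array([0.40, 0.25, 0.15, 0.12, 0.08])   # top-5 word probabilities (illustrative)
ids = np.array([17, 3, 256, 42, 7])                # their word ids (illustrative)
next_id = np.random.choice(ids, p=probs / probs.sum())
```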
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_count = Counter(text)
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_words)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
pun_dict = dict()
pun_dict['.'] = '||Period||'
pun_dict[','] = '||Comma||'
pun_dict['"'] = '||Quotation_mark||'
pun_dict[';'] = '||Semicolon||'
pun_dict["!"] = '||Exclamation_mark||'
pun_dict["?"] = '||Question_mark||'
pun_dict["("] = '||Left_parentheses||'
pun_dict[")"] = '||Right_parenthese||'
pun_dict["-"] = '||Dash||'
pun_dict["\n"] = '||Return||'
return pun_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
num_batches = len(words) // batch_size
word_batches = words[: num_batches*batch_size]
x, y = [], []
for i in range(len(word_batches)-sequence_length):
x.append(word_batches[i:sequence_length+i])
if len(words) > sequence_length+i:
y.append(words[sequence_length+i])
# If there are no more words to be predicted, add a period as the final target
if len(x) != len(y):
y.append(vocab_to_int['||period||'])
x, y = np.array(x), np.array(y)
data = TensorDataset(torch.from_numpy(x), torch.from_numpy(y))
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 20, 21, 22, 23, 24],
[ 25, 26, 27, 28, 29],
[ 28, 29, 30, 31, 32],
[ 34, 35, 36, 37, 38],
[ 7, 8, 9, 10, 11],
[ 27, 28, 29, 30, 31],
[ 21, 22, 23, 24, 25],
[ 40, 41, 42, 43, 44],
[ 11, 12, 13, 14, 15],
[ 41, 42, 43, 44, 45]])
torch.Size([10])
tensor([ 25, 30, 33, 39, 12, 32, 26, 45, 16, 46])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# Embedding layer
batch_size = nn_input.size(0)
embeds = self.embed(nn_input)
# LSTM layer
lstm_output, hidden = self.lstm(embeds,hidden)
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim) # stack LSTM outputs for the fully-connected layer
# Linear layer
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm_` could be applied here to limit exploding gradients in RNNs / LSTMs; it is not used in this run
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.004257010777791
Epoch: 1/10 Loss: 4.423942148844401
Epoch: 1/10 Loss: 4.273961540381114
Epoch: 1/10 Loss: 4.18605407222112
Epoch: 2/10 Loss: 4.026157460201115
Epoch: 2/10 Loss: 3.9250682214101156
Epoch: 2/10 Loss: 3.918394592920939
Epoch: 2/10 Loss: 3.8999721196492514
Epoch: 3/10 Loss: 3.793609598950397
Epoch: 3/10 Loss: 3.732712996323903
Epoch: 3/10 Loss: 3.7434992198944093
Epoch: 3/10 Loss: 3.7634035917917887
Epoch: 4/10 Loss: 3.6680665175570852
Epoch: 4/10 Loss: 3.6139183773994445
Epoch: 4/10 Loss: 3.638539548079173
Epoch: 4/10 Loss: 3.652538258075714
Epoch: 5/10 Loss: 3.5700433661910664
Epoch: 5/10 Loss: 3.535647804896037
Epoch: 5/10 Loss: 3.5342669665018716
Epoch: 5/10 Loss: 3.5724144185384112
Epoch: 6/10 Loss: 3.5065568181258935
Epoch: 6/10 Loss: 3.460029465675354
Epoch: 6/10 Loss: 3.490467675526937
Epoch: 6/10 Loss: 3.5011508835156757
Epoch: 7/10 Loss: 3.428452478911927
Epoch: 7/10 Loss: 3.4065818718274437
Epoch: 7/10 Loss: 3.435428986708323
Epoch: 7/10 Loss: 3.466533782482147
Epoch: 8/10 Loss: 3.385046666406542
Epoch: 8/10 Loss: 3.370145404020945
Epoch: 8/10 Loss: 3.3879157797495525
Epoch: 8/10 Loss: 3.419543927033742
Epoch: 9/10 Loss: 3.342440372251421
Epoch: 9/10 Loss: 3.311663731098175
Epoch: 9/10 Loss: 3.3471543645858763
Epoch: 9/10 Loss: 3.3860708090464273
Epoch: 10/10 Loss: 3.3088166061637856
Epoch: 10/10 Loss: 3.290230896313985
Epoch: 10/10 Loss: 3.323513492266337
Epoch: 10/10 Loss: 3.3555690369606017
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Yes, I tried different sequence_lengths and found that larger values made the loss decrease faster, but training took too long, so I chose sequence_length = 10, which is good enough for this task. I also tried different values of hidden_dim and n_layers and found that increasing them lowers the loss but increases the training time. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu() # np.roll needs a CPU tensor
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {word: i for i, word in enumerate(set(text))}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int,int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_Parentheses||',
'-': '||dash||',
'\n': '||return||'
}
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
    # TODO: Implement function
    features = []
    target = []
    # slide a window of length `sequence_length` over the word ids; the word that
    # immediately follows each window is its prediction target
    for i in range(len(words) - sequence_length):
        features.append(words[i:i+sequence_length])
        target.append(words[i+sequence_length])
    # wrap the tensors in a TensorDataset and return a shuffling DataLoader
    data = TensorDataset(torch.tensor(features), torch.tensor(target))
    data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size, num_workers=8)
    return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
!curl -o workspace_utils.py https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5b0dea96_workspace-utils/workspace-utils.py
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1540 100 1540 0 0 8782 0 --:--:-- --:--:-- --:--:-- 10065
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
self.input_size = vocab_size
self.hidden_dim = hidden_dim
self.output_size = output_size
self.n_layers = n_layers
self.dropout = dropout
self.embed = nn.Embedding(vocab_size,embedding_dim)
self.lstm = nn.LSTM(embedding_dim,hidden_dim,n_layers,dropout=dropout,batch_first = True)
self.fc = nn.Linear(hidden_dim,output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embed_output = self.embed(nn_input)
lstm_out,hidden = self.lstm(embed_output,hidden)
lstm_out = lstm_out.contiguous().view(-1,self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size,-1,self.output_size)
out=out[:,-1]
# return one batch of output word scores and the hidden state
return out,hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight =next(self.parameters()).data
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
if torch.cuda.is_available():
hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_(),
weight.new(self.n_layers,batch_size,self.hidden_dim).zero_())
return hidden
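# Optional shape check (a minimal sketch; the tiny dimensions below are made up
# purely to illustrate the expected output shape of the forward pass):
_toy_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
_toy_input = torch.randint(0, 20, (4, 6))       # batch of 4 sequences, 6 word ids each
if torch.cuda.is_available():
    _toy_rnn, _toy_input = _toy_rnn.cuda(), _toy_input.cuda()
_toy_hidden = _toy_rnn.init_hidden(4)
_toy_out, _toy_hidden = _toy_rnn(_toy_input, _toy_hidden)
print(_toy_out.shape)  # torch.Size([4, 20]) -> one row of word scores per sequence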
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
    # TODO: Implement Function
    # move data to GPU, if available
    if torch.cuda.is_available():
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so we don't backprop through the entire training history
    hidden = tuple([h.data for h in hidden])
    # forward pass
    rnn.zero_grad()
    out, hidden = rnn(inp, hidden)
    loss = criterion(out, target)
    # perform backpropagation and optimization, clipping gradients to avoid explosion
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tqdm import tqdm
from workspace_utils import active_session
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in tqdm(range(1, n_epochs + 1)):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 32 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
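# Rough training-length estimate for the settings above (a minimal sketch; purely
# informational, it does not affect training):
_n_train_batches = (len(int_text) - sequence_length) // batch_size
print('full batches per epoch:', _n_train_batches)
print('loss printouts per epoch:', _n_train_batches // show_every_n_batches)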
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
!ls
###Output
data helper.py __pycache__
dlnd_tv_script_generation.ipynb preprocess.p trained_rnn.pt
dlnd_tv_script_generation-zh.ipynb problem_unittests.py workspace_utils.py
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried to keep all size parameters as multiples of 2. The hyperparameter that most helped the model converge faster was the learning rate: I reduced it to 0.0005 instead of 0.001 or 0.005. I also tried models with 512 embedding and 512 hidden dimensions, but training took roughly 1.5 times longer (an estimated 12 hours), which my limited compute time would not allow me to complete. I kept the batch size at 128 so that I could still try out more complex models. From the lectures, I understood that going beyond 3 layers would not add much value. With the 512-dimensional embedding I reached a loss of about 3.5 after 20 epochs, and convergence with 256 dimensions was slightly quicker. My machine shut down while the model was running, so I had to reconnect; the final loss after 31 epochs (30 + 1 below) is about 2.9.
###Code
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, 1, show_every_n_batches)
###Output
0%| | 0/1 [00:00<?, ?it/s]
###Markdown
--- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # the generated word becomes the next "current sequence" and the cycle can continue
        if train_on_gpu:
            current_seq = current_seq.cpu()  # np.roll below needs a CPU tensor
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
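# Optional illustration of the top-k sampling step used inside `generate`
# (a minimal sketch; the score vector below is made up):
_scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.05]])
_probs = F.softmax(_scores, dim=1).data
_probs, _top_i = _probs.topk(3)
_probs, _top_i = _probs.numpy().squeeze(), _top_i.numpy().squeeze()
print(np.random.choice(_top_i, p=_probs / _probs.sum()))  # one of the 3 highest-scoring ids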
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: over.
george:(pause) well, i was a very nervous electronics. i don't know if you can do that.
jerry: oh, no...
elaine: no, i don't have to do anything.
elaine: oh!
jerry: you got a date?
kramer: no, i don't want it.
jerry: i don't know, but i don't have a massage.
george: what are you talking about?
jerry: i think it's dutch. i was thinking about it, so i can't stand out of the way to the end of the night.
elaine: i don't know, maybe i was gonna get it out, i have no idea...
george: no, you can't believe that, you know, i can't do that.(kramer is speechless)
jerry:(to the phone) what are you doing here, huh?
george: oh, no, no, no, no. i got it, and i was thinking of the game. i mean, it's not a big thing i am.(to george) i have to tell you, i don't know.
jerry:(jokingly hits his hand) oh, i don't know.(kramer throws his head up and down for the window, and starts to move on).
elaine:(to jerry) what do you think?
frank: you know what i mean, i was in the shop with the last one in the morning where we are? i can't believe that i was a little nervous.
susan: i know, but.
jerry: well, you know what? i was thinking, i have a very exciting time.
george: well.
jerry: you mean, the one who said that is a guy in the pool, he was a fantasy, like the milk, the shelves.
elaine: so, i guess she might have to be a good man, and i can't get out of the
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = Counter(text)
vocab_sorted = sorted(vocab, key=vocab.get, reverse=True)
vocab_to_int = {word: idx for (idx, word) in enumerate(vocab_sorted)}
int_to_vocab = {idx: word for (idx, word) in enumerate(vocab_sorted)}
# return tuple
return (vocab_to_int, int_to_vocab)
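# Optional check that the most frequent word receives the smallest id under this
# frequency-sorted implementation (a minimal sketch with a made-up toy list):
_toy_words = ['yada', 'yada', 'yada', 'newman']
_toy_v2i, _toy_i2v = create_lookup_tables(_toy_words)
print(_toy_v2i['yada'], _toy_i2v[0])  # 0 yada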
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
table = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parens||',
')': '||right_parens||',
'-': '||dash||',
'\n': '||return||'
}
return table
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# Create dataset
feature_tensors = np.array([words[i:i+sequence_length] for i in range(len(words) - sequence_length)])
target_tensors = np.array([words[i+sequence_length] for i in range(len(words) - sequence_length)])
data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors))
# Create dataloader
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
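# Quick check against the worked example in the text above (a minimal sketch;
# the toy values are for illustration only):
_toy_loader = batch_data([1, 2, 3, 4, 5, 6, 7], sequence_length=4, batch_size=3)
_bx, _by = next(iter(_toy_loader))
print(_bx.shape, _by.shape)  # torch.Size([3, 4]) torch.Size([3])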
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[33, 34, 35, 36, 37],
[42, 43, 44, 45, 46],
[24, 25, 26, 27, 28],
[19, 20, 21, 22, 23],
[41, 42, 43, 44, 45],
[11, 12, 13, 14, 15],
[22, 23, 24, 25, 26],
[ 2, 3, 4, 5, 6],
[18, 19, 20, 21, 22]])
torch.Size([10])
tensor([ 5, 38, 47, 29, 24, 46, 16, 27, 7, 23])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# Pass input through embedding
embeds = self.embedding(nn_input)
# Pass embedding through LSTM
lstm_out, hidden = self.lstm(embeds, hidden)
# Stack up the LSTM outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# Pass via FC layer
output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
output = output[:, -1]
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
c0 = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
h0 = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
if train_on_gpu:
c0, h0 = c0.cuda(), h0.cuda()
hidden = (c0, h0)
return hidden
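# Optional check of the hidden-state shapes (a minimal sketch with made-up toy dimensions):
_toy_rnn = RNN(vocab_size=10, output_size=10, embedding_dim=4, hidden_dim=8, n_layers=2)
_c0, _h0 = _toy_rnn.init_hidden(batch_size=3)
print(_c0.shape, _h0.shape)  # torch.Size([2, 3, 8]) each -> (n_layers, batch_size, hidden_dim)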
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
    if train_on_gpu:
        inputs, target = inp.cuda(), target.cuda()
    else:
        inputs = inp
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
rnn.zero_grad()
output, hidden = rnn(inputs, hidden)
loss = criterion(output.squeeze(), target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
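# Optional note: some implementations also clip gradients before optimizer.step(),
# e.g. nn.utils.clip_grad_norm_(rnn.parameters(), 5), as a safeguard against
# exploding gradients in LSTMs; it is not required to pass the tests.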
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 512
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
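# Rough parameter-count estimate for the settings above (a minimal sketch; the
# formula assumes the Embedding -> LSTM -> Linear architecture defined earlier):
_embed_params = vocab_size * embedding_dim
_lstm_params = (4 * hidden_dim * (embedding_dim + hidden_dim + 2)
                + (n_layers - 1) * 4 * hidden_dim * (hidden_dim + hidden_dim + 2))
_fc_params = hidden_dim * output_size + output_size
print('approx. trainable parameters: {:,}'.format(_embed_params + _lstm_params + _fc_params))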
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.5208431291580204
Epoch: 1/20 Loss: 4.91288213300705
Epoch: 1/20 Loss: 4.700299118995667
Epoch: 1/20 Loss: 4.624004141330719
Epoch: 1/20 Loss: 4.540631830215454
Epoch: 1/20 Loss: 4.504076158046723
Epoch: 1/20 Loss: 4.396777099609375
Epoch: 1/20 Loss: 4.427033710956573
Epoch: 1/20 Loss: 4.35421296787262
Epoch: 1/20 Loss: 4.331726727962494
Epoch: 1/20 Loss: 4.32406603384018
Epoch: 1/20 Loss: 4.2876290249824525
Epoch: 1/20 Loss: 4.296672243118286
Epoch: 1/20 Loss: 4.25823878622055
Epoch: 1/20 Loss: 4.226593720912933
Epoch: 1/20 Loss: 4.22704678106308
Epoch: 1/20 Loss: 4.218754285335541
Epoch: 1/20 Loss: 4.194418120861053
Epoch: 1/20 Loss: 4.186532360076904
Epoch: 1/20 Loss: 4.164459554672241
Epoch: 1/20 Loss: 4.14344580745697
Epoch: 1/20 Loss: 4.109210339069366
Epoch: 1/20 Loss: 4.108456349372863
Epoch: 1/20 Loss: 4.137543073177338
Epoch: 1/20 Loss: 4.10374250125885
Epoch: 1/20 Loss: 4.134370882987976
Epoch: 1/20 Loss: 4.172676734447479
Epoch: 2/20 Loss: 4.039641259704281
Epoch: 2/20 Loss: 3.9448425426483156
Epoch: 2/20 Loss: 3.9628278641700745
Epoch: 2/20 Loss: 3.944454472541809
Epoch: 2/20 Loss: 3.9454182543754577
Epoch: 2/20 Loss: 3.975700412273407
Epoch: 2/20 Loss: 3.953226851463318
Epoch: 2/20 Loss: 3.9516706080436705
Epoch: 2/20 Loss: 3.933338113307953
Epoch: 2/20 Loss: 3.9367256212234496
Epoch: 2/20 Loss: 3.9608750429153443
Epoch: 2/20 Loss: 3.9500488328933714
Epoch: 2/20 Loss: 3.9420691504478453
Epoch: 2/20 Loss: 3.977406876564026
Epoch: 2/20 Loss: 3.9587139692306517
Epoch: 2/20 Loss: 3.9571049942970276
Epoch: 2/20 Loss: 3.990330852985382
Epoch: 2/20 Loss: 3.9233874478340147
Epoch: 2/20 Loss: 3.9528178162574767
Epoch: 2/20 Loss: 3.9277992687225343
Epoch: 2/20 Loss: 3.953704288005829
Epoch: 2/20 Loss: 3.939410804271698
Epoch: 2/20 Loss: 3.9229781188964843
Epoch: 2/20 Loss: 3.9774698014259338
Epoch: 2/20 Loss: 3.9426963043212893
Epoch: 2/20 Loss: 3.932018835067749
Epoch: 2/20 Loss: 3.9862265763282774
Epoch: 3/20 Loss: 3.8646559786364927
Epoch: 3/20 Loss: 3.8229868898391723
Epoch: 3/20 Loss: 3.794576765060425
Epoch: 3/20 Loss: 3.7507790188789367
Epoch: 3/20 Loss: 3.80746222114563
Epoch: 3/20 Loss: 3.784794083595276
Epoch: 3/20 Loss: 3.832058834552765
Epoch: 3/20 Loss: 3.8021265749931334
Epoch: 3/20 Loss: 3.821722647666931
Epoch: 3/20 Loss: 3.807749447822571
Epoch: 3/20 Loss: 3.791622525215149
Epoch: 3/20 Loss: 3.8489490089416503
Epoch: 3/20 Loss: 3.8285027027130125
Epoch: 3/20 Loss: 3.845472381591797
Epoch: 3/20 Loss: 3.8354111852645874
Epoch: 3/20 Loss: 3.8234596433639525
Epoch: 3/20 Loss: 3.828839545726776
Epoch: 3/20 Loss: 3.881803472518921
Epoch: 3/20 Loss: 3.8805883417129516
Epoch: 3/20 Loss: 3.871351625919342
Epoch: 3/20 Loss: 3.84910764169693
Epoch: 3/20 Loss: 3.8857558341026306
Epoch: 3/20 Loss: 3.8753841009140015
Epoch: 3/20 Loss: 3.8907879128456115
Epoch: 3/20 Loss: 3.896808834075928
Epoch: 3/20 Loss: 3.8795574893951414
Epoch: 3/20 Loss: 3.883176497936249
Epoch: 4/20 Loss: 3.8085179488879803
Epoch: 4/20 Loss: 3.678759519577026
Epoch: 4/20 Loss: 3.746946903705597
Epoch: 4/20 Loss: 3.7356966819763184
Epoch: 4/20 Loss: 3.6670956602096556
Epoch: 4/20 Loss: 3.740797432422638
Epoch: 4/20 Loss: 3.742275369167328
Epoch: 4/20 Loss: 3.7004221696853636
Epoch: 4/20 Loss: 3.745614948272705
Epoch: 4/20 Loss: 3.7330620369911194
Epoch: 4/20 Loss: 3.7421479721069337
Epoch: 4/20 Loss: 3.774201177597046
Epoch: 4/20 Loss: 3.769252538204193
Epoch: 4/20 Loss: 3.761264575958252
Epoch: 4/20 Loss: 3.7626648359298707
Epoch: 4/20 Loss: 3.7849260969161986
Epoch: 4/20 Loss: 3.785734745502472
Epoch: 4/20 Loss: 3.788772799015045
Epoch: 4/20 Loss: 3.780445031642914
Epoch: 4/20 Loss: 3.789897382259369
Epoch: 4/20 Loss: 3.848272988796234
Epoch: 4/20 Loss: 3.817666428565979
Epoch: 4/20 Loss: 3.8253357820510865
Epoch: 4/20 Loss: 3.8366506061553953
Epoch: 4/20 Loss: 3.8171124124526976
Epoch: 4/20 Loss: 3.8339194841384887
Epoch: 4/20 Loss: 3.8384078378677366
Epoch: 5/20 Loss: 3.7361603872463727
Epoch: 5/20 Loss: 3.660498927593231
Epoch: 5/20 Loss: 3.6444961400032043
Epoch: 5/20 Loss: 3.658623426437378
Epoch: 5/20 Loss: 3.6547256813049316
Epoch: 5/20 Loss: 3.6665225257873537
Epoch: 5/20 Loss: 3.6726128215789795
Epoch: 5/20 Loss: 3.685898305416107
Epoch: 5/20 Loss: 3.6918712496757506
Epoch: 5/20 Loss: 3.6636448354721067
Epoch: 5/20 Loss: 3.6928962898254394
Epoch: 5/20 Loss: 3.704521493434906
Epoch: 5/20 Loss: 3.6979769144058228
Epoch: 5/20 Loss: 3.6839334845542906
Epoch: 5/20 Loss: 3.7423541111946106
Epoch: 5/20 Loss: 3.7401878938674926
Epoch: 5/20 Loss: 3.773093376159668
Epoch: 5/20 Loss: 3.750902168750763
Epoch: 5/20 Loss: 3.746588635444641
Epoch: 5/20 Loss: 3.776627181529999
Epoch: 5/20 Loss: 3.77808452129364
Epoch: 5/20 Loss: 3.7232019176483155
Epoch: 5/20 Loss: 3.8135217542648316
Epoch: 5/20 Loss: 3.8073607091903687
Epoch: 5/20 Loss: 3.7771801495552064
Epoch: 5/20 Loss: 3.7659069757461547
Epoch: 5/20 Loss: 3.8139733572006227
Epoch: 6/20 Loss: 3.6846436890550316
Epoch: 6/20 Loss: 3.5973926548957826
Epoch: 6/20 Loss: 3.6004745759963988
Epoch: 6/20 Loss: 3.6476523299217223
Epoch: 6/20 Loss: 3.6226420345306396
Epoch: 6/20 Loss: 3.660858807563782
Epoch: 6/20 Loss: 3.653540452957153
Epoch: 6/20 Loss: 3.648666620731354
Epoch: 6/20 Loss: 3.6672579278945925
Epoch: 6/20 Loss: 3.6592908339500427
Epoch: 6/20 Loss: 3.6385306553840637
Epoch: 6/20 Loss: 3.6231074204444886
Epoch: 6/20 Loss: 3.6737765488624574
Epoch: 6/20 Loss: 3.6823623933792113
Epoch: 6/20 Loss: 3.706289093017578
Epoch: 6/20 Loss: 3.7184004950523377
Epoch: 6/20 Loss: 3.7118902163505556
Epoch: 6/20 Loss: 3.6875153017044067
Epoch: 6/20 Loss: 3.7069093446731567
Epoch: 6/20 Loss: 3.7170572218894957
Epoch: 6/20 Loss: 3.7417512273788454
Epoch: 6/20 Loss: 3.732189100265503
Epoch: 6/20 Loss: 3.7244850053787233
Epoch: 6/20 Loss: 3.712848754405975
Epoch: 6/20 Loss: 3.7379821271896363
Epoch: 6/20 Loss: 3.739783139228821
Epoch: 6/20 Loss: 3.7468568572998047
Epoch: 7/20 Loss: 3.6612618134552424
Epoch: 7/20 Loss: 3.5675605010986327
Epoch: 7/20 Loss: 3.5754138979911803
Epoch: 7/20 Loss: 3.5904976992607116
Epoch: 7/20 Loss: 3.599379881858826
Epoch: 7/20 Loss: 3.5901567082405093
Epoch: 7/20 Loss: 3.635919868469238
Epoch: 7/20 Loss: 3.6437377524375916
Epoch: 7/20 Loss: 3.5973007946014404
Epoch: 7/20 Loss: 3.6448113861083984
Epoch: 7/20 Loss: 3.6121344809532165
Epoch: 7/20 Loss: 3.6772372198104857
Epoch: 7/20 Loss: 3.6380057425498964
Epoch: 7/20 Loss: 3.63852068567276
Epoch: 7/20 Loss: 3.663372646331787
Epoch: 7/20 Loss: 3.681339054107666
Epoch: 7/20 Loss: 3.7027099046707153
Epoch: 7/20 Loss: 3.6768950743675233
Epoch: 7/20 Loss: 3.6628886275291443
Epoch: 7/20 Loss: 3.6745788402557373
Epoch: 7/20 Loss: 3.682631419658661
Epoch: 7/20 Loss: 3.7028126711845397
Epoch: 7/20 Loss: 3.6785551300048827
Epoch: 7/20 Loss: 3.7123553137779237
Epoch: 7/20 Loss: 3.7270219249725343
Epoch: 7/20 Loss: 3.714261393547058
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** * `sequence_length`. This value determines what portion of the scripts the network sees each time. It's important that it's not too small, so that the network can learn the context surrounding each word. I first tried a value of 100, but soon realized that it was too big - the network was not powerful enough to learn a 100-word context! It simply got stuck at a training loss of about 4.5. Then I lowered it to 10, more or less the average length of a sentence from each person, and the network managed to get the loss below 3.5. The results at test time were pretty reasonable as well, so I kept this value. A higher value would allow the network to learn more complex sentences, but it would require more predictive power as well as longer training time.* `n_layers`. At first I used 1 for simplicity, but PyTorch complained about it due to using dropout for the LSTM component; increasing the number to 2 removed the warning and the training results were successful. Typically the value used here is 1-3, according to the lectures.* `hidden_dim`. Following the recommendations from the lectures, previous projects and the Knowledge Hub, I used 256. Using powers of 2 is usually recommended for layer sizes for faster training. This value worked at the first attempt so I didn't try changing it. Perhaps a smaller value like 128 could speed up training without too much loss in accuracy.* `embedding_dim`. Similar to other projects, I used a power of 2, in this case 512, which was in the recommended range. Given that the input vocabulary is about 20,000 words, I felt a dimensionality of 512 would have enough power to extract good features out of those 20,000 words. A smaller dimension perhaps would not capture all the required features for so many words.* `batch_size`. As large as fit in my GPU memory (64). A larger batch size lets the network see more inputs at once and better update the weights.* `epochs`. I first tried 3 and 10, but the loss didn't quite stabilize at 3.5. With 20 epochs it managed to reach that recommended threshold.* `learning_rate`. I used the default 0.001 from the Adam optimizer. I did try higher and lower rates, but they didn't make the training converge as expected.* `output_size`. Equal to the vocabulary size, since we want to predict the most likely word out of the vocabulary, via the CrossEntropyLoss. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # move the sequence back to the CPU so np.roll below can operate on it
        if train_on_gpu:
            current_seq = current_seq.cpu()
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: el paso, the tub is the same, and a man, and he tells her the police.
george: well i don't know if you were going to the movies.
jerry: well, it's just a natural situation. you know how 'bout the coma is the devil.
george: you know i was wondering, you know what this is?
jerry: i don't know what i do is about, i just don't want to know how much it is.
kramer: hey jerry, i'm sorry. i'm not gonna get this.
jerry: i can't believe it.
elaine:(to jerry) what about the letter?
jerry:(to jerry) i know.
jerry:(to george) you know how much i can make?
kramer: i don't know.
jerry:(pointing) hey, hey, hey!(to elaine)
elaine: hey hey, hey.
jerry: what are you doing here?
george: well i think i'm gonna be able to get it back.
george: what is this all you want?
jerry: you got some money?
george: yeah, yeah.
kramer: oh, no, you can't.
elaine: i don't want to be able to get to the movies.
jerry: i can't believe it.
kramer: oh, i can't believe you.
george: oh, you know, i can't believe you ski.
kramer: well what happened?
jerry: you know, i was just curious. i can't believe i wanted to go to the funeral and i have to go to the bathroom.
jerry:(thinking) oh, i don't want you to do it!
jerry: well you don't even like the army.
george: oh, yeah, yeah. i think that's right...
kramer: yeah, yeah, yeah? yeah. yeah, yeah.
jerry:(looking up) oh, yeah.
jerry:
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
from collections import Counter
vocab = set(text.split(' '))
vocab
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {v: i for i, v in enumerate(vocab)}
int_to_vocab = {i: v for v, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
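As a point of reference, here is a minimal sketch of such a mapping (the exact token strings are an assumption; any values that cannot be mistaken for real words will do):

```
# Sketch only: one possible punctuation-to-token mapping (token names are illustrative).
example_token_map = {
    '.': '||period||',
    ',': '||comma||',
    '"': '||quotation_mark||',
    ';': '||semicolon||',
    '!': '||exclamation_mark||',
    '?': '||question_mark||',
    '(': '||left_parentheses||',
    ')': '||right_parentheses||',
    '-': '||dash||',
    '\n': '||return||',
}
```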
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
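A rough sketch of one way to build these feature/target pairs before wrapping them in `TensorDataset` and `DataLoader` (the helper name `batch_data_sketch` and the choice to drop the final, target-less window are assumptions for illustration, not part of the project template):

```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    """Sketch: slide a window of sequence_length over words; the word after each window is its target."""
    words = np.asarray(words)
    n_windows = len(words) - sequence_length  # keep only windows that still have a "next word"
    features = np.array([words[i:i + sequence_length] for i in range(n_windows)])
    targets = words[sequence_length:sequence_length + n_windows]
    data = TensorDataset(torch.from_numpy(features).long(),
                         torch.from_numpy(targets).long())
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```

With `words = [1, 2, 3, 4, 5, 6, 7]` and `sequence_length = 4`, this yields the pairs `([1, 2, 3, 4], 5)`, `([2, 3, 4, 5], 6)` and `([3, 4, 5, 6], 7)`.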
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
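A compact sketch that follows the two hints above, using an embedding layer, a batch-first LSTM and a final linear layer (the extra layer sizes and the device-agnostic zero initialisation of the hidden state are assumptions, not requirements; a GRU-based module would work just as well):

```
import torch.nn as nn

class SketchRNN(nn.Module):
    """Sketch: embedding -> LSTM -> fully-connected, returning only the last word scores."""
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super(SketchRNN, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.output_size = output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_out, hidden = self.lstm(self.embedding(nn_input), hidden)
        # hint 1: stack the LSTM outputs before the fully-connected layer
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        out = self.fc(lstm_out)
        # hint 2: reshape and keep only the scores for the last word of each sequence
        out = out.view(batch_size, -1, self.output_size)
        return out[:, -1], hidden

    def init_hidden(self, batch_size):
        # zero hidden and cell states, created on the same device as the model weights
        weight = next(self.parameters()).data
        return (weight.new_zeros(self.n_layers, batch_size, self.hidden_dim),
                weight.new_zeros(self.n_layers, batch_size, self.hidden_dim))
```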
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
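One way such a training step could look (detaching the hidden state and clipping gradients at 5 are common LSTM practices assumed here, not requirements stated above):

```
import torch
import torch.nn as nn

def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    """Sketch: one forward/backward pass returning (batch loss, new hidden state)."""
    # detach the hidden state so gradients do not flow back through the whole history
    hidden = tuple(h.data for h in hidden)
    if torch.cuda.is_available():
        inp, target = inp.cuda(), target.cuda()
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)  # assumed clipping threshold
    optimizer.step()
    return loss.item(), hidden
```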
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
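For orientation only, one plausible starting configuration (every value below is an assumption to be tuned against the loss target, and `vocab_to_int` is assumed to be the dictionary loaded from the checkpoint above):

```
# Assumed starting values -- tune these against the loss target below.
sequence_length = 10
batch_size = 128
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)   # one id per token in the vocabulary
output_size = vocab_size         # the network scores every vocabulary token
embedding_dim = 256
hidden_dim = 512
n_layers = 2
show_every_n_batches = 500
```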
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        if train_on_gpu:
            current_seq = current_seq.cpu() # move back to cpu so np.roll can operate on it
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
import collections
from string import punctuation
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# don't need to remove punctuation- it's turned into its respective tokens before this text is passed to this function
#text_no_punc = []
#for word in text:
# new_word = ''.join([c for c in word if c not in punctuation])
# text_no_punc.append(new_word)
word_freq = collections.Counter(text)
sorted_vocab = sorted(word_freq, key=word_freq.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {".": "||period||", ",": "||comma||", '"': "||quotation_mark||", ";": "||semicolon||", "!": "||exclamation_mark||", "?": "||question_mark||", "(": "||left_parantheses||", ")": "||right_parantheses||", "-": "||dash||", "\n": "||return||"}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(len(vocab_to_int.keys()))
print(max(int_to_vocab.keys()))
###Output
21388
21387
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# only full batches
n_ele_in_one_batch = sequence_length * batch_size
n_batches = len(words) // n_ele_in_one_batch
words_full_batches = words[:(n_batches*n_ele_in_one_batch)]
all_features = []
all_labels = []
for i in range(0, len(words_full_batches), sequence_length):
all_features.append(words[i:i+sequence_length])
#if words is exactly a multiple of sequence_length, trying to grab the next element for labels will fail
try:
all_labels.append(words[i+sequence_length])
except:
all_labels.append(words[0])
feature_tensor = torch.Tensor(all_features)
target_tensor = torch.Tensor(all_labels)
# return a dataloader
data = TensorDataset(feature_tensor, target_tensor)
return DataLoader(data, batch_size=batch_size, shuffle=True)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# my test
# currently the last batch from the generator could have less than batch_size (but full sequence_length rows)
gen = iter(batch_data([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 4, 3))
feat, label = gen.next()
print(feat)
print(label)
###Output
tensor([[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.],
[ 9., 10., 11., 12.]])
tensor([ 5., 9., 13.])
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 5., 6., 7., 8., 9.],
[ 0., 1., 2., 3., 4.],
[ 40., 41., 42., 43., 44.],
[ 35., 36., 37., 38., 39.],
[ 15., 16., 17., 18., 19.],
[ 30., 31., 32., 33., 34.],
[ 25., 26., 27., 28., 29.],
[ 20., 21., 22., 23., 24.],
[ 45., 46., 47., 48., 49.],
[ 10., 11., 12., 13., 14.]])
torch.Size([10])
tensor([ 10., 5., 45., 40., 20., 35., 30., 25., 0., 15.])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.drop_prob = dropout
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, dropout=dropout, batch_first=True)
self.dropout_layer = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# make sure nn_input is a tensor of ints
if train_on_gpu:
# don't call cuda in the forward function
# try
# nn_input = torch.LongTensor(nn_input)
nn_input = nn_input.type(torch.cuda.LongTensor)
# hidden already .cuda if applicable in init_hidden
else:
nn_input = nn_input.type(torch.LongTensor)
batch_size = nn_input.shape[0]
out = self.embed(nn_input)
out, hidden = self.lstm(out, hidden)
# stack out of lstm
out = out.contiguous().view(-1, self.hidden_dim)
out = self.dropout_layer(out)
out = self.fc(out)
# out shape is currently batch_size * seq_len, output_size; need to take class score predictions from all the sequence length's last words
out = out.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return out[:,-1, :], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
hidden_1 = torch.zeros((self.n_layers, batch_size, self.hidden_dim))
cell = torch.zeros((self.n_layers, batch_size, self.hidden_dim))
        if train_on_gpu:
            # .cuda() is not in-place, so reassign to actually move the tensors onto the GPU
            hidden_1 = hidden_1.cuda()
            cell = cell.cuda()
return (hidden_1, cell)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
if train_on_gpu:
# TODO make sure target is a tensor of ints
# target = target.type(torch.cuda.LongTensor)
# hidden already .cuda if applicable in init_hidden
inp = inp.cuda()
#target = target.cuda()
#target = target.type(torch.cuda.LongTensor)
target = torch.LongTensor(target).cuda()
else:
target = target.type(torch.LongTensor)
# TODO: Implement Function
# zero the gradients
rnn.zero_grad()
next_word_pred, new_hidden = rnn(inp, hidden)
# move data to GPU, if available
# if train_on_gpu:
# next_word_pred.cuda()
# new_hidden.cuda()
# perform backpropagation and optimization
loss = criterion(next_word_pred, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), new_hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 2
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 7
# Learning Rate
learning_rate = .01
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = embedding_dim + 100
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
# TODO used to be 500
show_every_n_batches = 1
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
    :param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # the generated word becomes the next "current sequence" and the cycle can continue
        if train_on_gpu:
            current_seq = current_seq.cpu() # np.roll needs the tensor back on the cpu
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
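A minimal sketch of the two dictionaries (sorting the words by frequency is optional; enumerating a plain `set` of the words would produce valid lookup tables just as well):

```
from collections import Counter

def create_lookup_tables_sketch(text):
    """Sketch: build word<->id dictionaries from a list of words."""
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return (vocab_to_int, int_to_vocab)
```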
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
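A minimal sketch of such a batching function might look like the following (assuming `words` is an ordered sequence of ids, and simply dropping the trailing words that cannot form a complete feature/target pair):
```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    words = np.asarray(words)
    n_sequences = len(words) - sequence_length
    # each feature is a window of `sequence_length` ids; the target is the id that follows it
    features = np.array([words[i:i + sequence_length] for i in range(n_sequences)])
    targets = words[sequence_length:sequence_length + n_sequences]
    data = TensorDataset(torch.from_numpy(features).long(),
                         torch.from_numpy(targets).long())
    return DataLoader(data, batch_size=batch_size, shuffle=True)
```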
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word":```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following:```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
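Putting those hints together, the forward pass and hidden-state initialization could be sketched roughly as follows (these are methods of the `RNN` class; the sketch assumes `self.embedding`, `self.lstm` and `self.fc` layers plus `self.hidden_dim`, `self.output_size` and `self.n_layers` attributes are created in `__init__`, and that an LSTM is used so the hidden state is a tuple):
```
def forward(self, nn_input, hidden):
    batch_size = nn_input.size(0)
    embeds = self.embedding(nn_input)                # (batch, seq_len, embedding_dim)
    lstm_out, hidden = self.lstm(embeds, hidden)     # (batch, seq_len, hidden_dim)
    lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
    output = self.fc(lstm_out)                       # (batch * seq_len, output_size)
    output = output.view(batch_size, -1, self.output_size)
    return output[:, -1], hidden                     # word scores for the last time step only

def init_hidden(self, batch_size):
    # two zero-initialized tensors (h, c) of shape (n_layers, batch_size, hidden_dim)
    h = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
    c = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
    if train_on_gpu:
        h, c = h.cuda(), c.cuda()
    return (h, c)
```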
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
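As a rough sketch (assuming an LSTM, so `hidden` is a tuple of tensors, and that `train_on_gpu` is the flag defined earlier), the training step described above might look like this:
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    # detach the hidden state so gradients do not flow across batch boundaries
    hidden = tuple(h.detach() for h in hidden)
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    return loss.item(), hidden
```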
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
# example values; these match the settings used in the completed training run later in this notebook
sequence_length = 56  # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = sorted(set(text))
vocab_to_int = dict()
int_to_vocab = dict()
for index, word in enumerate(vocab):
vocab_to_int[word] = index
int_to_vocab[index] = word
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
".": "<period>",
",": "<comma>",
"\"": "<quotation_mark>",
";": "<semicolon>",
"!": "<exclamation_mark>",
"?": "<question_mark>",
"(": "<left_parentheses>",
")": "<right_parentheses>",
"-": "<dash>",
"\n": "<newline>"
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
sequence_num = len(words) - sequence_length
features = list()
targets = list()
for i in range(sequence_num):
begin = i
end = begin + sequence_length
feature = words[begin:end]
features.append(feature)
target = words[end]
targets.append(target)
features = np.array(features)
features = torch.from_numpy(features)
targets = np.array(targets)
targets = torch.from_numpy(targets)
if train_on_gpu:
features = features.cuda()
targets = targets.cuda()
dataset = TensorDataset(features, targets)
# return a dataloader
return torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[25, 26, 27, 28, 29],
[44, 45, 46, 47, 48],
[22, 23, 24, 25, 26],
[30, 31, 32, 33, 34],
[10, 11, 12, 13, 14],
[19, 20, 21, 22, 23],
[20, 21, 22, 23, 24],
[26, 27, 28, 29, 30],
[37, 38, 39, 40, 41],
[15, 16, 17, 18, 19]], device='cuda:0')
torch.Size([10])
tensor([30, 49, 27, 35, 15, 24, 25, 31, 42, 20], device='cuda:0')
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(
input_size=embedding_dim,
hidden_size=hidden_dim,
num_layers=n_layers,
dropout=dropout,
batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size()[0]
output = self.embedding(nn_input)
output, hidden = self.lstm(output, hidden)
output = output[:,-1,:]
output = self.fc(output)
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
hidden = (
torch.zeros(self.n_layers, batch_size, self.hidden_dim),
torch.zeros(self.n_layers, batch_size, self.hidden_dim)
)
if train_on_gpu:
hidden = (
hidden[0].cuda(),
hidden[1].cuda()
)
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp = inp.cuda()
target = target.cuda()
# perform backpropagation and optimization
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
optimizer.step()
hidden = (
hidden[0].detach(),
hidden[1].detach()
)
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 56 # of words in a sequence
# Batch Size
batch_size = 128
dataset_size = len(int_text)
# data loader - do not change
train_loader = batch_data(int_text[:dataset_size], sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = (dataset_size // batch_size) // 1 - 1
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 4.289141745572796
Epoch: 2/30 Loss: 3.7877754305370024
Epoch: 3/30 Loss: 3.5796319028879275
Epoch: 4/30 Loss: 3.4241222559695847
Epoch: 5/30 Loss: 3.297029173755632
Epoch: 6/30 Loss: 3.198483217804277
Epoch: 7/30 Loss: 3.1157518120129493
Epoch: 8/30 Loss: 3.048004728322187
Epoch: 9/30 Loss: 2.9860444950534
Epoch: 10/30 Loss: 2.9312726543698564
Epoch: 11/30 Loss: 2.8875475097102847
Epoch: 12/30 Loss: 2.8496716623926184
Epoch: 13/30 Loss: 2.815100453354601
Epoch: 14/30 Loss: 2.7826926462755326
Epoch: 15/30 Loss: 2.7564288917274107
Epoch: 16/30 Loss: 2.7320139785462256
Epoch: 17/30 Loss: 2.7068050035634603
Epoch: 18/30 Loss: 2.684290092034873
Epoch: 19/30 Loss: 2.665286581559714
Epoch: 20/30 Loss: 2.6454119196682546
Epoch: 21/30 Loss: 2.6299615289819753
Epoch: 22/30 Loss: 2.614596900031289
Epoch: 23/30 Loss: 2.5973563381923905
Epoch: 24/30 Loss: 2.5837340978762566
Epoch: 25/30 Loss: 2.570698447079685
Epoch: 26/30 Loss: 2.5586750670542657
Epoch: 27/30 Loss: 2.545758034239063
Epoch: 28/30 Loss: 2.5345035795160338
Epoch: 29/30 Loss: 2.5233328167960094
Epoch: 30/30 Loss: 2.514612071242834
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**For the very first training runs, I used a stripped-down dataset consisting of the first 10000 words of the original text. This sped up the training iterations and let me check whether the designed network could be trained at all. I experimented with the number of layers and the hidden dimension. Afterwards, I switched to the full-sized dataset but limited the number of training epochs to 20~30. I then noticed that the difference in training speed is already visible after 5 epochs, so from that point on I ran several 5-epoch training sessions until I found parameters that gave a "good" training speed. Once I had settled on all parameters, I set the epoch number to 30, since by that time the training loss usually settles and does not change significantly.I saved the training loss values between sessions so that I could see how different parameters affect the training speed. Below you can find a graph of the training loss history for several training sessions, together with the parameter values associated with each session.
###Code
import json
import matplotlib.pyplot as plt
from pprint import pprint
%matplotlib inline
with open("history.json", "r") as history_file:
sessions = json.load(history_file)
for session_id, session in enumerate(sessions):
if session["params"]["dataset_size"] != 10000:
print(f"Session {session_id} parameters:")
pprint(session["params"])
epoch_average_losses = list()
hist = session["history"][:5]
for epoch in hist:
epoch_length = len(epoch)
last_losses_begin = epoch_length * 9 // 10
last_losses = epoch[last_losses_begin:]
average_loss = np.mean(last_losses)
epoch_average_losses.append(average_loss)
epoch_ids = range(len(hist))
plt.plot(epoch_ids, epoch_average_losses, label=session_id)
plt.legend()
###Output
Session 0 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 200,
'hidden_dim': 64,
'learning_rate': 0.001,
'n_layers': 3,
'num_epochs': 100,
'sequence_length': 56}
Session 1 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 200,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 3,
'num_epochs': 30,
'sequence_length': 56}
Session 2 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 200,
'hidden_dim': 128,
'learning_rate': 0.001,
'n_layers': 4,
'num_epochs': 30,
'sequence_length': 56}
Session 3 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 200,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 4,
'num_epochs': 30,
'sequence_length': 56}
Session 4 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 200,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 2,
'num_epochs': 30,
'sequence_length': 56}
Session 16 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 300,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 2,
'num_epochs': 5,
'sequence_length': 56}
Session 17 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 300,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 2,
'num_epochs': 5,
'sequence_length': 16}
Session 18 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 400,
'hidden_dim': 256,
'learning_rate': 0.001,
'n_layers': 2,
'num_epochs': 5,
'sequence_length': 16}
Session 19 parameters:
{'batch_size': 128,
'dataset_size': 892110,
'embedding_dim': 300,
'hidden_dim': 512,
'learning_rate': 0.001,
'n_layers': 2,
'num_epochs': 5,
'sequence_length': 56}
###Markdown
I decided to use 10 lines of script as context. Taking into consideration the average script line length of ~5.6 words, I set `sequence_length` to 56. I also tried a value of 16, which corresponds to roughly 3 lines of script, but the performance degraded a little, so I stayed with 56.I selected a batch size of 128 to ensure roughly 90% GPU utilization (as reported by nvidia-smi).I tried embedding sizes of 100, 200, 300 and 400. For a hidden dimension of 256, an embedding dimension of 300 gave the best loss, but for a hidden dimension of 512, the best loss was achieved with an embedding dimension of 400.I tried different combinations of hidden dimensions (64, 128, 256 and 512) and numbers of LSTM layers (1 to 4). The best training speed was achieved with 2 LSTM layers and 512 hidden units. Increasing the number of LSTM layers worsened the training speed, I suspect because of the extra dropout layer it introduces. Increasing the number of hidden units further might have improved the loss, but it would also have increased the training time, which is why I stopped at 512.I selected a learning rate of 0.001, the default for the Adam optimizer. In my experience this learning rate is a good starting point for almost any model. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: challenged name...(the salt broken fingers, he makes a big phony)
elaine: what is this?! i can't believe you liked that!
elaine: oh my god. you don't have a flush button.(chuckles)
jerry:(to the phone) oh, yeah.(to elaine) you know this show on the other night, and they overlook," get outta the way!" i mean, they don't allow the chaperone off. they don't even want to get the cable companies.
elaine: oh!
kramer: i got it! chew skin!
jerry: oh, i think it's worth something.
kramer: yeah, but they scored the duck will probably hit 'em.
jerry: well, what is this about?
kramer: oh, well, i gotta go to the bathroom.
jerry: you want to get me something to eat?
kramer: oh.
jerry: hey
kramer: yeah, well, i'm sure it's not a beauty.
jerry:(sarcastic) what?
kramer: what?
jerry: i don't think so. i mean, you know, maybe i'll just go down to the electric wash.
kramer: hey buddy. you ready?
jerry:(looking at the cop) i think i may not have it.
george:(leaving) woah. alright.(holds the phone back to him to be waiting at the table)
jerry: hey!
george: hey!
george: hey. hey!
jerry: hey.(sprays binaca to kramer with a napkin) thanks mate.
elaine: hi.
jerry: hi, elaine, i'm elaine.(kramer leaves. elaine waves at elaine.)
jerry: hey.
kramer: hey.
jerry: hey.
kramer:(from bathroom) hey.
kramer: hey.
jerry: hey, where's george?
kramer: yeah.
elaine: hey.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
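For intuition, here is a tiny sketch of the two dictionaries built from a made-up word list (the words are purely illustrative):

```
words = ['jerry', 'hello', 'newman', 'hello', 'jerry', 'elaine']

# one id per unique word; sorting keeps this example deterministic
int_to_vocab = {idx: word for idx, word in enumerate(sorted(set(words)))}
vocab_to_int = {word: idx for idx, word in int_to_vocab.items()}

print(vocab_to_int)  # {'elaine': 0, 'hello': 1, 'jerry': 2, 'newman': 3}
print(int_to_vocab)  # {0: 'elaine', 1: 'hello', 2: 'jerry', 3: 'newman'}
print([vocab_to_int[w] for w in words])  # [2, 1, 3, 1, 2, 0]
```

Note that the implementation below does not need to sort; any consistent one-to-one mapping between words and ids will do.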
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_dict = {word: None for word in text}
int_to_vocab = {idx: word for idx, word in enumerate(word_dict.keys())}
vocab_to_int = {word: idx for idx, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
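To see why the surrounding spaces matter, here is a small sketch of how such a token dictionary can be applied to text and later undone. This only mirrors the idea; the real tokenization lives in `helpers.py`, and the token names below are just examples:

```
token_dict = {'.': '||Period||', '!': '||Exclamation_Mark||', ',': '||Comma||'}

line = 'bye, jerry. bye!'

# tokenize: surround each replacement with spaces so symbols become separate "words"
for symbol, token in token_dict.items():
    line = line.replace(symbol, ' {} '.format(token))
print(line.split())
# ['bye', '||Comma||', 'jerry', '||Period||', 'bye', '||Exclamation_Mark||']

# detokenize: map the tokens back to the original symbols
restored = ' '.join(line.split())
for symbol, token in token_dict.items():
    restored = restored.replace(' ' + token, symbol)
print(restored)  # bye, jerry. bye!
```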
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punctuation_token = {
'.': '||dot||',
',': '||comma||',
'-': '||dash||',
';': '||semi_colon||',
'"': '||quotation||',
'?': '||question||',
'!': '||exclamation||',
'(': '||left_paren||',
')': '||right_paren||',
'\n': '||newline||',
}
return punctuation_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
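As a rough sketch of the kind of steps `preprocess_and_save_data` performs (this is an assumption about its behaviour, not the actual `helpers.py` code), using the `token_lookup` and `create_lookup_tables` functions defined above and two lines quoted from the dataset:

```
raw_text = 'george: are you through?\njerry: you do of course try on, when you buy?'

# 1. replace punctuation with tokens surrounded by spaces
for symbol, token in token_lookup().items():
    raw_text = raw_text.replace(symbol, ' {} '.format(token))

# 2. lowercase the text and split it into words
words = raw_text.lower().split()

# 3. build the lookup tables and encode the script as a list of word ids
vocab_to_int, int_to_vocab = create_lookup_tables(words)
int_text = [vocab_to_int[word] for word in words]

print(int_text[:8])                             # the first few word ids
print([int_to_vocab[i] for i in int_text[:8]])  # and the words they map back to
```

The real helper also saves these objects to disk so that the Check Point cell below can reload them.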
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
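As a concrete illustration of the windowing described above, here is a plain-Python sketch using the toy `words` list from the example:

```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features, targets = [], []
for i in range(len(words) - sequence_length):
    features.append(words[i:i + sequence_length])  # a window of sequence_length words
    targets.append(words[i + sequence_length])     # the word that follows the window

print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```

These lists are what get wrapped in tensors and handed to `TensorDataset`/`DataLoader` in the implementation below.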
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# number of (feature, target) pairs that can be made from the word list
num_targets = (len(words) - sequence_length)
# initialise feature and targets vars as two empty lists
features, target = [], []
for i in range(num_targets):
x = words[i : i+sequence_length] # get some words from the given list
y = words[i+sequence_length] # get the next word to be the target
features.append(x)
target.append(y)
feature_tensor, target_tensor = torch.from_numpy(np.array(features)), torch.from_numpy(np.array(target))
# create data
data = TensorDataset(feature_tensor, target_tensor)
# create dataloader
dataloader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[41, 42, 43, 44, 45],
[12, 13, 14, 15, 16],
[40, 41, 42, 43, 44],
[17, 18, 19, 20, 21],
[ 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11],
[10, 11, 12, 13, 14],
[ 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5],
[27, 28, 29, 30, 31]], dtype=torch.int32)
torch.Size([10])
tensor([46, 17, 45, 22, 7, 12, 15, 10, 6, 32], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
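The reshaping hint can be made concrete with dummy shapes; this is only a sketch with made-up sizes (batch_size=10, seq_length=5, hidden_dim=8, output_size=20):

```
import torch

batch_size, seq_length, hidden_dim, output_size = 10, 5, 8, 20

# pretend this came out of the LSTM: (batch, seq, hidden)
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)

# stack so every time step goes through the same fully-connected layer
stacked = lstm_output.contiguous().view(-1, hidden_dim)  # shape (50, 8)

fc = torch.nn.Linear(hidden_dim, output_size)
output = fc(stacked)                                     # shape (50, 20)

# reshape back and keep only the scores for the last time step of each sequence
output = output.view(batch_size, -1, output_size)        # shape (10, 5, 20)
out = output[:, -1]                                      # shape (10, 20)
print(out.shape)  # torch.Size([10, 20])
```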
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# set class variables
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# define model layers
# linear layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input.long())
lstm_output, hidden = self.lstm(embeds, hidden)
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
hidden = tuple([e.data for e in hidden])
# reset gradients and return output
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output, target.long())
loss.backward()
# clip gradients norm to prevent exploding gradients
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The training progress will be printed after every set number of batches, which is controlled by the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 11 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 220
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 5.551783257961273
Epoch: 1/15 Loss: 4.771057545185089
Epoch: 1/15 Loss: 4.557967208623886
Epoch: 1/15 Loss: 4.451229191541672
Epoch: 1/15 Loss: 4.3751030037403105
Epoch: 1/15 Loss: 4.307648555755615
Epoch: 2/15 Loss: 4.179115289840195
Epoch: 2/15 Loss: 4.070408574819565
Epoch: 2/15 Loss: 4.075216448307037
Epoch: 2/15 Loss: 4.064519879817962
Epoch: 2/15 Loss: 4.052738160848618
Epoch: 2/15 Loss: 4.0327731063365935
Epoch: 3/15 Loss: 3.9470079830299483
Epoch: 3/15 Loss: 3.867176472187042
Epoch: 3/15 Loss: 3.8864018812179566
Epoch: 3/15 Loss: 3.8922921442985534
Epoch: 3/15 Loss: 3.868153561115265
Epoch: 3/15 Loss: 3.875649547100067
Epoch: 4/15 Loss: 3.8085734866003267
Epoch: 4/15 Loss: 3.761287745475769
Epoch: 4/15 Loss: 3.7519507703781128
Epoch: 4/15 Loss: 3.7585046875476835
Epoch: 4/15 Loss: 3.778041011095047
Epoch: 4/15 Loss: 3.7760436074733734
Epoch: 5/15 Loss: 3.709826213203147
Epoch: 5/15 Loss: 3.6605616896152497
Epoch: 5/15 Loss: 3.6642795016765595
Epoch: 5/15 Loss: 3.6813820621967315
Epoch: 5/15 Loss: 3.6950231969356535
Epoch: 5/15 Loss: 3.709610483407974
Epoch: 6/15 Loss: 3.6399432192483245
Epoch: 6/15 Loss: 3.5923170359134673
Epoch: 6/15 Loss: 3.6193778476715086
Epoch: 6/15 Loss: 3.6184697694778443
Epoch: 6/15 Loss: 3.613909814834595
Epoch: 6/15 Loss: 3.6400432991981506
Epoch: 7/15 Loss: 3.580007696950902
Epoch: 7/15 Loss: 3.533606464624405
Epoch: 7/15 Loss: 3.5447707233428956
Epoch: 7/15 Loss: 3.5375457437038422
Epoch: 7/15 Loss: 3.5696508259773254
Epoch: 7/15 Loss: 3.5918638756275176
Epoch: 8/15 Loss: 3.547452989600527
Epoch: 8/15 Loss: 3.491873105287552
Epoch: 8/15 Loss: 3.488748780488968
Epoch: 8/15 Loss: 3.516300818681717
Epoch: 8/15 Loss: 3.530898873567581
Epoch: 8/15 Loss: 3.553862015247345
Epoch: 9/15 Loss: 3.4885908456669656
Epoch: 9/15 Loss: 3.460539013147354
Epoch: 9/15 Loss: 3.4609071877002715
Epoch: 9/15 Loss: 3.480745003938675
Epoch: 9/15 Loss: 3.4771428878307344
Epoch: 9/15 Loss: 3.512293109416962
Epoch: 10/15 Loss: 3.4614001480439645
Epoch: 10/15 Loss: 3.4096969039440155
Epoch: 10/15 Loss: 3.4219684290885923
Epoch: 10/15 Loss: 3.4503337030410766
Epoch: 10/15 Loss: 3.46154877114296
Epoch: 10/15 Loss: 3.4860416829586027
Epoch: 11/15 Loss: 3.435480975087613
Epoch: 11/15 Loss: 3.3809241626262665
Epoch: 11/15 Loss: 3.409766830444336
Epoch: 11/15 Loss: 3.4205078206062316
Epoch: 11/15 Loss: 3.436436573266983
Epoch: 11/15 Loss: 3.458058296918869
Epoch: 12/15 Loss: 3.397295983083482
Epoch: 12/15 Loss: 3.365796604394913
Epoch: 12/15 Loss: 3.3796579282283785
Epoch: 12/15 Loss: 3.4012505297660827
Epoch: 12/15 Loss: 3.4084031472206116
Epoch: 12/15 Loss: 3.4305192005634306
Epoch: 13/15 Loss: 3.3747973559531177
Epoch: 13/15 Loss: 3.333830099105835
Epoch: 13/15 Loss: 3.353144202709198
Epoch: 13/15 Loss: 3.379785180568695
Epoch: 13/15 Loss: 3.385064254760742
Epoch: 13/15 Loss: 3.4118691589832304
Epoch: 14/15 Loss: 3.368195487618628
Epoch: 14/15 Loss: 3.3238744597434997
Epoch: 14/15 Loss: 3.3331131815910338
Epoch: 14/15 Loss: 3.3434905445575716
Epoch: 14/15 Loss: 3.376626351118088
Epoch: 14/15 Loss: 3.386060166597366
Epoch: 15/15 Loss: 3.3307401964421075
Epoch: 15/15 Loss: 3.293493180513382
Epoch: 15/15 Loss: 3.313623753786087
Epoch: 15/15 Loss: 3.338271718263626
Epoch: 15/15 Loss: 3.3491438009738923
Epoch: 15/15 Loss: 3.362390919685364
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - Batch size: 64 and 256 gave a loss of approx. 3.9, whereas 128 gave the best result of around 3.1.- learning_rate: the standard 0.001 worked well off the shelf, and combined with a higher epoch count I was comfortable it would converge eventually.- embedding_dim: I tried the value of 200 suggested in the lessons and it worked well.- n_layers: 4 and 5 layers had higher loss than 3 layers.- hidden_dim: 128 and 512 had significantly higher loss than 256.- sequence_length: values > 11 did not improve performance, so I settled for 11 (10 had near-identical performance, though). Ideally, I would have liked to tweak this parameter further, but I am satisfied with the current performance.- num_epochs: set to 15 (training locally on an Nvidia 1080 Ti). From previous experience, anything beyond 50 would not have a significant effect on the loss; I would have increased it toward 50 for the best loss. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.cpu().numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 2000 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
elaine: hummm borrowing cinqo woah, but you can't get me a quarter?
george:(pointing) you know what?!
kramer:(to the phone) what do you do?
kramer: oh! sotheby's! employees! burning a lot of olive waffles!
george: well, i just don't know how you feel!
jerry: well you know i don't know. i know, i'm not going to get the camera, and i'm gonna go to the airport, i'm gonna be a little sweetie...
jerry: yeah.
kramer: well, schedules and gentlemen, you know, i know what you do.
elaine: well, you don't have to be a good person. you don't know what you do. i can't believe this was a little bit.
jerry: you mean, the basic thing that you have.(jerry looks around) oh yeah, i can't get it for you.
jerry: well, maybe i could get some more of this, i can't get that.(to jerry)
jerry: i can't believe i'm going to be able to get a little more of the company's pizza. i can't do this.(jerry enters, she leaves)
george: hey, hey, hey. bastards. theaters. oh, hi, hi. hi elaine.
kramer:(from intercom) oh, hi.
jerry: hi. carson, hi.(he hangs up.)
elaine: hey, hey, how was that?
george: i think i can go with it.(she looks at his watch)..
jerry: i know. astroturf. koren...
jerry: oh! blocked it! thirsty!
jerry: hey, hey! hey! bastards! hamilton: the movie!
kramer: well, schedules and gentlemen---
elaine: oh, yeah, sure.
jerry: oh, yeah, i know.
elaine: i can't believe this. i don't know how i was in a coma.
kramer: well, i think we can get a cab, i know, it's not the one.
elaine: i don't know.
george:(looking at the phone) hello?(to jerry) hey, i gotta go see the pharmacist.
george:(to george) so?
jerry: you can't have a big salad?
elaine: yeah, yeah? i don't know. i know what i'm doing.
george: you mean you can't have it.
jerry: oh, no no no, no, no. i just can't..
kramer:(on the street) hello.
george: hey, what are you doing here?
kramer: oh, no... burning the slob.
george: you know, you don't know how much you do.
kramer: i know.
elaine:(to george) what?!
kramer: hey, hey. holders. burning. burning the vault.
jerry: oh, hi.
jerry: oh, grandpa.
jerry: i don't think so.
helen: you know i can't get the veggie burger.(jerry is trying to keep the door.)
jerry:(to jerry) i don't want to see you.
jerry: i think i could do this.
george:(to jerry) you see, you should have any time.
jerry: what is the point of that?
elaine: no, no. no. triangles. no.
elaine: oh! blocked it!
george: no no no, it's no!
kramer: oh, i don't think so!
kramer:(to elaine) hey!
jerry:(to kramer) hey, i got a great meal for you.
kramer: yeah, i don't think so. you know, i don't want you to do it.
jerry: well, what is it?
jerry: i thought you hated the show.
kramer: yeah.
jerry: oh, feigning.
kramer: yeah, yeah.
elaine: i know.. i.. [drumming the whole thing.
elaine:(to the cashier) you know i got that.
elaine: i know how much i can.(she leaves)
jerry:(to elaine) i can't believe you're going.
jerry: oh, i can't get it. i'm gonna get the hell out of my mind.
george: you know, i can't believe i'm doing that. i think you can get it back with me.
kramer: well, i think you can get a ride.
elaine:(to jerry) what is the matter here?
jerry: what?
kramer: i got it.
george: oh, come on, let's go to the bathroom.
kramer: hey, you got a date.
elaine: i can't believe you were in the mood.
george: well, i know what this is.
george:(to elaine) what are you doing?
kramer: oh, i don't think so.
jerry: i think i should.
kramer:(to the door) you know, you know.
kramer: yeah, yeah.
george: you know what i think about this guy? i mean," what do you think?!!
jerry: i don't know. knocked it in the way.
kramer:(to george) what is it?
george: well, you know, i don't think i can have to do that.
jerry: i thought you had any idea.
elaine: well, i guess i should be there, i can't...
kramer:(to jerry) what? what do you say?
newman: i think i'm gonna have to say something to you. i don't even know how i can.
elaine: well, you know, i'm not really interested in the car.
kramer: oh, yeah. yeah. yeah...(he hangs up the door. jerry and george are talking]) oh, i think you're so cool!
jerry:(to george) i mean, i have a very radical thing for the show.
kramer: well, i don't know.
jerry:(looking at jerry) you think you can see the doctor?
kramer:(pointing) oh, no, i didn't think i have to tell you...
kramer: hey!
jerry: hey jerry! burning this!!!!.. gain."
jimmy:" well you know, i was wondering if you can see each other, but i don't have to go.
jerry: i can't believe you're going to be a little..(jerry is shocked, then stops to be a gesture, but puts it up, and then i can't be able to be able to make a cab to get it out, and we'll be able to be a very courageous friend of my own life.
jerry:(still trying to take a look of the novocaine, and i don't want to go with you.(kramer is shown)
[setting: the coffee shop]
jerry: you know.
george: oh. holders. burning the candles for a second.
jerry: i can't believe you're talking about this.
george: i can't. i'm sorry.
jerry: well, i think i'm gonna be a little sweetie tweetie weetie weetie.
newman: yeah? well, i'm not going to be a little bit of the own york.
elaine: you know, it's like a man are a pretty sensitive, huh?
elaine: oh yeah.
jerry:(to the door) i know.
jerry:(to jerry) so what?
jerry: i don't know. knocked the keys off.
kramer: hey.
kramer: hey, blocked it up, george.(to george) hey, i got a little good.
elaine: oh, no no no i didn't. i think i can go out of my apartment.
kramer: hey.
george: what are you doing here for?
jerry: i can't believe this!
jerry: you know, i think it's great.
jerry: oh, you know that, you know, you were just going for a little, you know, it's all over, but i know how i could have been a little bit on it.
george: i know, i think i was going to get out of this, but i don't have to be a little eccentric, i'm sorry. i know what i'm doing.
george: oh, i can't do this!
jerry:(to kramer) hey, i think it's the most important thing.
jerry: you know what? i can't believe it.
elaine: well, what are you doin'?
kramer: i don't know. i don't know what you do, i mean, i have no idea...(kramer enters)
jerry:(to jerry) i can't believe it.
kramer: yeah!
elaine: i don't know, i think it's...
jerry: i know. i'm going to a prostitute!(kramer is shown, and starts dancing and walks down to the door) hey, what do you think?
jerry:(pointing) oh, yeah.
george: i think he thinks that.
george: well, maybe i was going to be a little harsh to the rest of the building.
jerry: oh, no, i got the car.
george: what do i say?
elaine:(confused) yeah, i just don't.
george: i know, i don't know. you don't have a job, you know, i know, i don't know, i can't believe i'm gonna be able to know.
jerry: well i don't know, i just thought you could go
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# words = text.split()
word_counts = Counter(text)
# sort the words from the most to the least frequent in the text
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create the int_to_vocab and vocab_to_int dictionaries
int_to_vocab = {idx: word for idx, word in enumerate(sorted_vocab)}
vocab_to_int = {word: idx for idx, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
puntuation = ['.', ',', '"', ';', '!', '?', '(', ')', '-', '\n']
token = [
'PERIOD',
'COMMA',
'QUOTATION_MARK',
'SEMICOLON',
'EXCLAMATION_MARK',
'QUESTION_MARK',
'LEFT_PAREN',
'RIGHT_PAREN',
'HYPHENS',
'RETURN'
]
token_dict = dict(zip(puntuation, token))
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
x , y = [], []
for i in range(len(words)):
if (i + sequence_length) < len(words):
x.append(words[i:i + sequence_length])
y.append(words[i + sequence_length])
# Creating tensor data
feature_tensors = torch.from_numpy(np.asarray(x))
target_tensors = torch.from_numpy(np.asarray(y))
data = TensorDataset(feature_tensors,target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout= dropout, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# define model layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
#embedding and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
#stacking lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#dropout and fully-connected layer
output = self.dropout(lstm_out)
output = self.fc(output)
# reshape to the batch size first
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# create two new tensors with sizes n_layers x batch_size x hidden_dim
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if (train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# detach the hidden state so we don't backpropagate through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradient
rnn.zero_grad()
# get output from model
output, hidden = rnn(inp, hidden)
# calculating loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
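As a rough aid for choosing these sizes (an illustration only, not part of the assignment), the parameter count of the embedding -> LSTM -> linear stack can be estimated from the dimensions above; the vocabulary size of roughly 46,000 words is the one reported by the dataset statistics elsewhere in this notebook.
```
# rough parameter-count estimate for an embedding -> LSTM -> linear stack (illustration only)
def approx_params(vocab_size, embedding_dim, hidden_dim, n_layers):
    embedding = vocab_size * embedding_dim
    # each LSTM layer has 4 gates with input->hidden and hidden->hidden weights plus two biases
    lstm = 4 * hidden_dim * (embedding_dim + hidden_dim + 2)
    lstm += (n_layers - 1) * 4 * hidden_dim * (2 * hidden_dim + 2)
    fc = (hidden_dim + 1) * vocab_size
    return embedding + lstm + fc

print(approx_params(46000, 300, 512, 3))   # roughly 43 million parameters
print(approx_params(46000, 200, 250, 2))   # roughly half of that
```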
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)  # special tokens (e.g. padding) are already included by the preprocessing
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 3000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.8686649867693585
Epoch: 2/20 Loss: 5.344936761461908
Epoch: 3/20 Loss: 4.352071378110607
Epoch: 4/20 Loss: 4.054746726555611
Epoch: 5/20 Loss: 3.8661262297876666
Epoch: 6/20 Loss: 3.7362306767435216
Epoch: 7/20 Loss: 3.6377746602424113
Epoch: 8/20 Loss: 3.5598370685369356
Epoch: 9/20 Loss: 3.491453578770503
Epoch: 10/20 Loss: 3.4322823315482465
Epoch: 11/20 Loss: 3.378375462768546
Epoch: 12/20 Loss: 3.330190904789212
Epoch: 13/20 Loss: 3.289626871677272
Epoch: 14/20 Loss: 3.2507561165347574
Epoch: 15/20 Loss: 3.216466604294651
Epoch: 16/20 Loss: 3.183058332767333
Epoch: 17/20 Loss: 3.1527546565513522
Epoch: 18/20 Loss: 3.120537461834852
Epoch: 19/20 Loss: 3.0933244806617055
Epoch: 20/20 Loss: 3.0724417322949344
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** These are the hyperparameters I started with: sequence_length = 10, hidden_dim = 256 (large enough to extract sufficient features for the model to learn), n_layers = 3 (2 or 3 layers are recommended; more layers would not reduce the error further but would make training slower) and embedding_dim = 200, which gave a final result of Epoch: 20/20 Loss: 3.5827105351289115. After increasing embedding_dim to 300, the final result improved to Epoch: 20/20 Loss: 3.0724417322949344. I also tuned the sequence length from 15 down to 10, but did not find any significant difference in performance. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:46: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
text = set(text)
vocab_to_int = {word: i for i, word in enumerate(text, 0)}
int_to_vocab = {vocab_to_int[word]: word for word in text}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
symbols_dict = {
'.':"||Period||",
',':"||Comma||",
'"':"||Quotation_Mark||",
';':"||Semicolon||",
'!':"||Exclamation_mark||",
'?':"||Question_mark||",
'(':"||Left_Parentheses||",
')':"||Right_Parentheses||",
'-':"||Dash||",
'\n':"||Return||"
}
return symbols_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
words = words[:(len(words)//batch_size)*batch_size]
num_of_sequence = len(words) - sequence_length
print("len(words) ", len(words))
print("num_of_sequence ", num_of_sequence)
feature_tensors = []
target_tensors = []
for ith_sequence in range(num_of_sequence):
feature_tensors.append(words[ith_sequence:ith_sequence + sequence_length])
target_tensors.append(words[ith_sequence + sequence_length])
data = TensorDataset(torch.from_numpy(np.asarray(feature_tensors)), torch.from_numpy(np.asarray(target_tensors)))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
import numpy as np
test_text = range(20)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
len(words) 20
num_of_sequence 15
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
#embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# linear layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.5062090702056885
Epoch: 1/10 Loss: 4.926387922286987
Epoch: 1/10 Loss: 4.686842391490936
Epoch: 1/10 Loss: 4.559702690601349
Epoch: 1/10 Loss: 4.553887212276459
Epoch: 1/10 Loss: 4.584089871406555
Epoch: 1/10 Loss: 4.483172094345092
Epoch: 1/10 Loss: 4.359795897483826
Epoch: 1/10 Loss: 4.333286078453064
Epoch: 1/10 Loss: 4.2777494196891785
Epoch: 1/10 Loss: 4.384523750782013
Epoch: 1/10 Loss: 4.415594127655029
Epoch: 1/10 Loss: 4.410172013759613
Epoch: 2/10 Loss: 4.209325594104026
Epoch: 2/10 Loss: 4.037347249984741
Epoch: 2/10 Loss: 3.9370961766242982
Epoch: 2/10 Loss: 3.896875053882599
Epoch: 2/10 Loss: 3.9483588485717775
Epoch: 2/10 Loss: 4.031085283756256
Epoch: 2/10 Loss: 3.969794249534607
Epoch: 2/10 Loss: 3.85464750289917
Epoch: 2/10 Loss: 3.869726559638977
Epoch: 2/10 Loss: 3.8377918701171874
Epoch: 2/10 Loss: 3.962148720741272
Epoch: 2/10 Loss: 3.9716279058456423
Epoch: 2/10 Loss: 3.9729313821792602
Epoch: 3/10 Loss: 3.886751433049352
Epoch: 3/10 Loss: 3.7983715415000914
Epoch: 3/10 Loss: 3.7215052394866945
Epoch: 3/10 Loss: 3.6959203553199766
Epoch: 3/10 Loss: 3.7014833650588987
Epoch: 3/10 Loss: 3.794395987510681
Epoch: 3/10 Loss: 3.742502285003662
Epoch: 3/10 Loss: 3.6467981848716735
Epoch: 3/10 Loss: 3.662913607120514
Epoch: 3/10 Loss: 3.6172107772827147
Epoch: 3/10 Loss: 3.735170942783356
Epoch: 3/10 Loss: 3.76316597032547
Epoch: 3/10 Loss: 3.7524899091720583
Epoch: 4/10 Loss: 3.707112671915165
Epoch: 4/10 Loss: 3.640891709804535
Epoch: 4/10 Loss: 3.5662037839889527
Epoch: 4/10 Loss: 3.551697920322418
Epoch: 4/10 Loss: 3.5387388339042665
Epoch: 4/10 Loss: 3.642184967517853
Epoch: 4/10 Loss: 3.5977067370414733
Epoch: 4/10 Loss: 3.4978094353675844
Epoch: 4/10 Loss: 3.5224487719535826
Epoch: 4/10 Loss: 3.4816968317031862
Epoch: 4/10 Loss: 3.620203785419464
Epoch: 4/10 Loss: 3.6233915395736696
Epoch: 4/10 Loss: 3.6023660225868226
Epoch: 5/10 Loss: 3.57944662117761
Epoch: 5/10 Loss: 3.5296650643348695
Epoch: 5/10 Loss: 3.462427608013153
Epoch: 5/10 Loss: 3.4440136847496032
Epoch: 5/10 Loss: 3.438046865463257
Epoch: 5/10 Loss: 3.530569804191589
Epoch: 5/10 Loss: 3.4985185546875
Epoch: 5/10 Loss: 3.3948234634399412
Epoch: 5/10 Loss: 3.408670659542084
Epoch: 5/10 Loss: 3.380607274532318
Epoch: 5/10 Loss: 3.521312997817993
Epoch: 5/10 Loss: 3.518187889099121
Epoch: 5/10 Loss: 3.501225535392761
Epoch: 6/10 Loss: 3.49327678793718
Epoch: 6/10 Loss: 3.44753271150589
Epoch: 6/10 Loss: 3.3876585803031922
Epoch: 6/10 Loss: 3.35939821767807
Epoch: 6/10 Loss: 3.3505120205879213
Epoch: 6/10 Loss: 3.4511019620895387
Epoch: 6/10 Loss: 3.419953513622284
Epoch: 6/10 Loss: 3.3165820841789246
Epoch: 6/10 Loss: 3.3282022681236265
Epoch: 6/10 Loss: 3.308493359565735
Epoch: 6/10 Loss: 3.436646268367767
Epoch: 6/10 Loss: 3.4430289816856385
Epoch: 6/10 Loss: 3.427484694004059
Epoch: 7/10 Loss: 3.420297096583469
Epoch: 7/10 Loss: 3.379414544582367
Epoch: 7/10 Loss: 3.3211863827705383
Epoch: 7/10 Loss: 3.3046035833358767
Epoch: 7/10 Loss: 3.2964598355293275
Epoch: 7/10 Loss: 3.3955893025398254
Epoch: 7/10 Loss: 3.3634757323265077
Epoch: 7/10 Loss: 3.260946491241455
Epoch: 7/10 Loss: 3.2705009050369265
Epoch: 7/10 Loss: 3.2532875366210936
Epoch: 7/10 Loss: 3.371650794506073
Epoch: 7/10 Loss: 3.384002482891083
Epoch: 7/10 Loss: 3.3693463759422304
Epoch: 8/10 Loss: 3.367954517198988
Epoch: 8/10 Loss: 3.342444756984711
Epoch: 8/10 Loss: 3.2698617420196534
Epoch: 8/10 Loss: 3.259161334514618
Epoch: 8/10 Loss: 3.258579605102539
Epoch: 8/10 Loss: 3.3561257271766665
Epoch: 8/10 Loss: 3.326989011287689
Epoch: 8/10 Loss: 3.2244384112358095
Epoch: 8/10 Loss: 3.2178842692375182
Epoch: 8/10 Loss: 3.213238892555237
Epoch: 8/10 Loss: 3.323213146686554
Epoch: 8/10 Loss: 3.336139287471771
Epoch: 8/10 Loss: 3.328948728084564
Epoch: 9/10 Loss: 3.326259451464188
Epoch: 9/10 Loss: 3.2946871476173403
Epoch: 9/10 Loss: 3.236014334201813
Epoch: 9/10 Loss: 3.2235552659034727
Epoch: 9/10 Loss: 3.2148429217338563
Epoch: 9/10 Loss: 3.312799217224121
Epoch: 9/10 Loss: 3.285686270236969
Epoch: 9/10 Loss: 3.1816952877044677
Epoch: 9/10 Loss: 3.18164088344574
Epoch: 9/10 Loss: 3.174444121837616
Epoch: 9/10 Loss: 3.2793217034339905
Epoch: 9/10 Loss: 3.2971090664863585
Epoch: 9/10 Loss: 3.284891571521759
Epoch: 10/10 Loss: 3.2858398820250487
Epoch: 10/10 Loss: 3.25834157705307
Epoch: 10/10 Loss: 3.19934152507782
Epoch: 10/10 Loss: 3.185322557926178
Epoch: 10/10 Loss: 3.1726383724212646
Epoch: 10/10 Loss: 3.267930762767792
Epoch: 10/10 Loss: 3.2477094926834105
Epoch: 10/10 Loss: 3.1451764459609985
Epoch: 10/10 Loss: 3.141101454257965
Epoch: 10/10 Loss: 3.134340192317963
Epoch: 10/10 Loss: 3.236891791820526
Epoch: 10/10 Loss: 3.271589668750763
Epoch: 10/10 Loss: 3.2643052315711976
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** The output size is the size of the vocabulary, since the network scores every word as a candidate for the next prediction. I used the course example from Lesson 6 (Sentiment Prediction RNN) as a reference point. My vocabulary has 46,367 words; since this is relatively small, I chose a smaller embedding dimension of 200 to represent each word. The instructor used a hidden layer of 256 units, which turned out to be a good size for distinguishing between positive and negative reviews; similarly, I expected 250 hidden features to be enough to make a good prediction of the next word. I used the same number of LSTM layers, as 2 layers should be enough for a model of this size. I started with 5 epochs to check the learning rate and get a feel for how the loss decreases; a learning rate of 0.001 gave a healthy, steady decline without the loss jumping up and down. With 5 epochs the loss came down to roughly 3.5-3.6, so I trained for 10 epochs, which was enough to get below 3.5; the loss was below 3.3 at the 10th epoch. References: Udacity Deep Learning Nanodegree Program, RNN Lesson 6 - Sentiment Prediction RNN - 12. Training the Model, https://youtu.be/yCC09vCHzF8; Stanford CS231N, Lecture 10 - Recurrent Neural Networks, Image Captioning, LSTM, https://youtu.be/8rXD5-xhemo; Stanford CS224N, Lecture 1 - Introduction and Word Vectors, https://youtu.be/kEMJRjEdNzM; Stanford CS224N, Lecture 2 - Word Vectors and Word Senses. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import os
import helper
data_dir = './data/Seinfeld_Scripts.txt'
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# create set of unique words
vocab_set = set(text)
# create empty dictionaries
vocab_to_int = {}
int_to_vocab = {}
# loop over set of words and them to the dicts
for idx, word in enumerate(vocab_set):
vocab_to_int[word] = idx
int_to_vocab[idx] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokenized_dict = {
'.' : '<PERIOD>',
',' : '<COMMA>',
'"' : '<QUOTATION_MARK>',
';' : '<SEMICOLON>',
'!' : '<EXCLAMATION_MARK>',
'?' : '<QUESTION_MARK>',
'(' : '<LEFT_PAREN>',
')' : '<RIGHT_PAREN>',
'-' : '<DASH>',
'\n': '<RETURN>'
}
return tokenized_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# create empty features and target lists
features_list = []
target_list = []
# loop over words list to create features lists of length sequence_length and corresponding targets
for i in range(0,len(words)-sequence_length,1):
features_list.append(words[i:i+sequence_length])
target_list.append(words[i+sequence_length])
# convert to numpy
features_np = np.array(features_list,dtype=int)
target_np = np.array(target_list, dtype=int)
# create dataset and dataloader
data = TensorDataset(torch.from_numpy(features_np), torch.from_numpy(target_np))
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# dropout layer
#self.dropout = nn.Dropout(0.3)
# linear and softmax layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
out = self.fc(lstm_out)
# softmax function
# reshape into (batch_size, seq_length, output_size)
output = out.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip = 5
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 16
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 16 epoch(s)...
Epoch: 1/16 Loss: 5.556302359104157
Epoch: 1/16 Loss: 4.979314372062683
Epoch: 1/16 Loss: 4.84900372171402
Epoch: 1/16 Loss: 4.659985776901245
Epoch: 1/16 Loss: 4.602403111934662
Epoch: 1/16 Loss: 4.451942446231842
Epoch: 1/16 Loss: 4.387503125667572
Epoch: 1/16 Loss: 4.446487414360046
Epoch: 1/16 Loss: 4.518097880840301
Epoch: 1/16 Loss: 4.370514465332032
Epoch: 1/16 Loss: 4.46170058298111
Epoch: 1/16 Loss: 4.505538481712342
Epoch: 1/16 Loss: 4.428290634632111
Epoch: 1/16 Loss: 4.359592231750488
Epoch: 1/16 Loss: 4.379877077579498
Epoch: 1/16 Loss: 4.196736328125
Epoch: 1/16 Loss: 4.233899799823761
Epoch: 1/16 Loss: 4.286175658226013
Epoch: 1/16 Loss: 4.244526471614837
Epoch: 1/16 Loss: 4.173489804267883
Epoch: 1/16 Loss: 4.307819705486297
Epoch: 1/16 Loss: 4.378726980686188
Epoch: 1/16 Loss: 4.405597256660461
Epoch: 1/16 Loss: 4.364223153591156
Epoch: 1/16 Loss: 4.358129034042358
Epoch: 1/16 Loss: 4.401098453521729
Epoch: 1/16 Loss: 4.308312260627747
Epoch: 2/16 Loss: 4.20590086155933
Epoch: 2/16 Loss: 4.032041732788086
Epoch: 2/16 Loss: 4.033846144676208
Epoch: 2/16 Loss: 3.9798114042282107
Epoch: 2/16 Loss: 3.969199767112732
Epoch: 2/16 Loss: 3.8885928826332092
Epoch: 2/16 Loss: 3.8568816151618956
Epoch: 2/16 Loss: 3.9143447952270507
Epoch: 2/16 Loss: 4.041599536418915
Epoch: 2/16 Loss: 3.9226250176429747
Epoch: 2/16 Loss: 4.036768639564515
Epoch: 2/16 Loss: 4.1137974934577946
Epoch: 2/16 Loss: 4.032521290779114
Epoch: 2/16 Loss: 3.9744135971069334
Epoch: 2/16 Loss: 3.9944256772994997
Epoch: 2/16 Loss: 3.855602966308594
Epoch: 2/16 Loss: 3.897054878950119
Epoch: 2/16 Loss: 3.943911460876465
Epoch: 2/16 Loss: 3.9209304070472717
Epoch: 2/16 Loss: 3.8439134006500244
Epoch: 2/16 Loss: 3.97528605222702
Epoch: 2/16 Loss: 4.055018325805664
Epoch: 2/16 Loss: 4.095055522918702
Epoch: 2/16 Loss: 4.085620954036712
Epoch: 2/16 Loss: 4.074289466381073
Epoch: 2/16 Loss: 4.094517741203308
Epoch: 2/16 Loss: 4.029502012729645
Epoch: 3/16 Loss: 3.9551477815650395
Epoch: 3/16 Loss: 3.8580670561790464
Epoch: 3/16 Loss: 3.8533291969299315
Epoch: 3/16 Loss: 3.7982065424919127
Epoch: 3/16 Loss: 3.7892553243637086
Epoch: 3/16 Loss: 3.7233733973503114
Epoch: 3/16 Loss: 3.680432620048523
Epoch: 3/16 Loss: 3.751641619682312
Epoch: 3/16 Loss: 3.8620801944732666
Epoch: 3/16 Loss: 3.7296109499931336
Epoch: 3/16 Loss: 3.8655763487815857
Epoch: 3/16 Loss: 3.9521175570487976
Epoch: 3/16 Loss: 3.8666548681259156
Epoch: 3/16 Loss: 3.822467324256897
Epoch: 3/16 Loss: 3.8306173009872437
Epoch: 3/16 Loss: 3.7318817849159243
Epoch: 3/16 Loss: 3.784616421699524
Epoch: 3/16 Loss: 3.8026616296768188
Epoch: 3/16 Loss: 3.7715783567428587
Epoch: 3/16 Loss: 3.7014256386756896
Epoch: 3/16 Loss: 3.8371938734054565
Epoch: 3/16 Loss: 3.9226734013557434
Epoch: 3/16 Loss: 3.9431958231925965
Epoch: 3/16 Loss: 3.932363947868347
Epoch: 3/16 Loss: 3.9254627542495726
Epoch: 3/16 Loss: 3.9728415536880495
Epoch: 3/16 Loss: 3.9136094856262207
Epoch: 4/16 Loss: 3.8323622534450252
Epoch: 4/16 Loss: 3.7639626865386964
Epoch: 4/16 Loss: 3.7801339983940125
Epoch: 4/16 Loss: 3.7508353934288023
Epoch: 4/16 Loss: 3.7057657923698426
Epoch: 4/16 Loss: 3.628568684577942
Epoch: 4/16 Loss: 3.614459502220154
Epoch: 4/16 Loss: 3.6540659584999085
Epoch: 4/16 Loss: 3.7575233483314516
Epoch: 4/16 Loss: 3.6267790617942812
Epoch: 4/16 Loss: 3.7644733295440673
Epoch: 4/16 Loss: 3.8383875560760496
Epoch: 4/16 Loss: 3.753015235900879
Epoch: 4/16 Loss: 3.7012168798446656
Epoch: 4/16 Loss: 3.7299409189224244
Epoch: 4/16 Loss: 3.6423277888298036
Epoch: 4/16 Loss: 3.6509691717624664
Epoch: 4/16 Loss: 3.688693720817566
Epoch: 4/16 Loss: 3.6663457927703855
Epoch: 4/16 Loss: 3.604248815536499
Epoch: 4/16 Loss: 3.7487168498039245
Epoch: 4/16 Loss: 3.805315224170685
Epoch: 4/16 Loss: 3.8399218702316285
Epoch: 4/16 Loss: 3.830983470916748
Epoch: 4/16 Loss: 3.805719886779785
Epoch: 4/16 Loss: 3.8433204574584963
Epoch: 4/16 Loss: 3.803190098285675
Epoch: 5/16 Loss: 3.7365238333916384
Epoch: 5/16 Loss: 3.6820581741333007
Epoch: 5/16 Loss: 3.6787782549858092
Epoch: 5/16 Loss: 3.6611591284275056
Epoch: 5/16 Loss: 3.637703104496002
Epoch: 5/16 Loss: 3.5480308547019956
Epoch: 5/16 Loss: 3.5189965586662293
Epoch: 5/16 Loss: 3.5707608137130737
Epoch: 5/16 Loss: 3.662746154785156
Epoch: 5/16 Loss: 3.560943691730499
Epoch: 5/16 Loss: 3.6744477415084837
Epoch: 5/16 Loss: 3.7533259778022767
Epoch: 5/16 Loss: 3.6875819849967955
Epoch: 5/16 Loss: 3.6351495785713195
Epoch: 5/16 Loss: 3.664176484584808
Epoch: 5/16 Loss: 3.571788050174713
Epoch: 5/16 Loss: 3.5730915801525116
Epoch: 5/16 Loss: 3.5892826557159423
Epoch: 5/16 Loss: 3.602575134754181
Epoch: 5/16 Loss: 3.5334886107444765
Epoch: 5/16 Loss: 3.6751333742141723
Epoch: 5/16 Loss: 3.734247670173645
Epoch: 5/16 Loss: 3.762551634311676
Epoch: 5/16 Loss: 3.7644194331169127
Epoch: 5/16 Loss: 3.7260990495681763
Epoch: 5/16 Loss: 3.769739712238312
Epoch: 5/16 Loss: 3.7221431250572206
Epoch: 6/16 Loss: 3.6653555544039693
Epoch: 6/16 Loss: 3.625122379541397
Epoch: 6/16 Loss: 3.6249263529777527
Epoch: 6/16 Loss: 3.6012139415740965
Epoch: 6/16 Loss: 3.5939000458717345
Epoch: 6/16 Loss: 3.4900908751487734
Epoch: 6/16 Loss: 3.4559272351264956
Epoch: 6/16 Loss: 3.527843885421753
Epoch: 6/16 Loss: 3.607867751121521
Epoch: 6/16 Loss: 3.5076293301582337
Epoch: 6/16 Loss: 3.6174275646209715
Epoch: 6/16 Loss: 3.7046808667182924
Epoch: 6/16 Loss: 3.630301184177399
Epoch: 6/16 Loss: 3.5854281044006346
Epoch: 6/16 Loss: 3.5940018877983095
Epoch: 6/16 Loss: 3.524826368331909
Epoch: 6/16 Loss: 3.5248674461841585
Epoch: 6/16 Loss: 3.545446871757507
Epoch: 6/16 Loss: 3.5489454789161683
Epoch: 6/16 Loss: 3.482248704433441
Epoch: 6/16 Loss: 3.632452450275421
Epoch: 6/16 Loss: 3.671952172279358
Epoch: 6/16 Loss: 3.703545093536377
Epoch: 6/16 Loss: 3.7010049157142637
Epoch: 6/16 Loss: 3.6731730046272277
Epoch: 6/16 Loss: 3.6949665684700013
Epoch: 6/16 Loss: 3.665810742378235
Epoch: 7/16 Loss: 3.611085271429076
Epoch: 7/16 Loss: 3.589846199989319
Epoch: 7/16 Loss: 3.569016757965088
Epoch: 7/16 Loss: 3.5480982704162596
Epoch: 7/16 Loss: 3.5357533931732177
Epoch: 7/16 Loss: 3.4459022121429443
Epoch: 7/16 Loss: 3.4172794380187987
Epoch: 7/16 Loss: 3.477015841960907
Epoch: 7/16 Loss: 3.552580554008484
Epoch: 7/16 Loss: 3.459586145401001
Epoch: 7/16 Loss: 3.562423429250717
Epoch: 7/16 Loss: 3.6540803694725037
Epoch: 7/16 Loss: 3.567286328792572
Epoch: 7/16 Loss: 3.5195066528320313
Epoch: 7/16 Loss: 3.5390851211547854
Epoch: 7/16 Loss: 3.484873257160187
Epoch: 7/16 Loss: 3.4795094430446625
Epoch: 7/16 Loss: 3.4869506392478944
Epoch: 7/16 Loss: 3.5113028059005735
Epoch: 7/16 Loss: 3.4337530336380007
Epoch: 7/16 Loss: 3.576595564365387
Epoch: 7/16 Loss: 3.617871961593628
Epoch: 7/16 Loss: 3.665402742385864
Epoch: 7/16 Loss: 3.6384147148132326
Epoch: 7/16 Loss: 3.625136462688446
Epoch: 7/16 Loss: 3.666977719783783
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**The first thing I tried to do is set the sequence length to a large number, 150, but that didn't seem to do a great job at converging the loss. I then set the sequence length to 10 and it did a great job. As for the hidden_dim, I set it to 256 and it seemed to do a good job so I stuck with it. As for the number of layers, 2 layers seem to give lower loss values than 1 layer. It would be worth it to try 3 layers and see how the loss converges. For the number of epochs, I first tried 3 to check if the loss is converging or not, and after that I tried to target a loss lower than 3.5 by setting the number of epochs to 16. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/home/shaarany/anaconda3/envs/pytorchenv/lib/python3.6/site-packages/ipykernel_launcher.py:46: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
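The `UserWarning` captured above only concerns the memory layout of the LSTM weights after the model is loaded back from disk; the generated script itself is unaffected. A minimal way to silence it, assuming the `lstm` attribute name used by the `RNN` class in this notebook, is to re-compact the weights once before calling `generate`:
```
# re-compact the LSTM weights into one contiguous chunk (safe to call even if already contiguous)
trained_rnn.lstm.flatten_parameters()
```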
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.54424029368
()
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
import numpy as np
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
sorted_vocab = sorted(counts, key=counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = dict()
token_dict["."] = "||period||"
token_dict[","] = "||comma||"
token_dict["\""] = "||quotationmark||"
token_dict[";"] = "||semicolon||"
token_dict["!"] = "||exclamationmark||"
token_dict["?"] = "||questionmark||"
token_dict["("] = "||lparentheses||"
token_dict[")"] = "||rparentheses||"
token_dict["-"] = "||dash||"
token_dict["\n"] = "||return||"
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
# TODO: Implement function
features, targets = [], []
for idx in range(0, (len(words) - sequence_length) ):
features.append(words[idx : idx + sequence_length])
targets.append(words[idx + sequence_length])
#print(features)
#print(targets)
data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(targets)))
data_loader = torch.utils.data.DataLoader(data, shuffle=False , batch_size = batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
()
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# linear layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
output = self.fc(lstm_out)
# reshape to be batch_size first
output = output.view(batch_size, -1, self.output_size)
out = output[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# get the output from the model
output, h = rnn(inp, h)
# perform backpropagation and optimization
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.57906087255
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Based on the course material on embeddings, I selected my model hyperparameters as follows:- sequence_length: I tried 10 and 20; sequence_length = 10 with batch_size = 128 converged faster.- batch_size: I tried 64, 128 and 256.- num_epochs: I set it to 10, which is enough.- learning_rate: I started from 0.001.- embedding_dim: typical embedding dimensions are around 200 - 500 in size. I tried 200, 300 and 400 and finally set it to 200, since our vocabulary is roughly 20K words.- hidden_dim: I set it to 256, so that it is larger than embedding_dim.- n_layers: it could be 2 or 3; I used 2 layers. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
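For reference, a minimal completion of `create_lookup_tables` is sketched below, mirroring the solved notebooks earlier in this document (the most frequent word gets id 0); this is one possible approach, not the template's official solution:
```
from collections import Counter

def create_lookup_tables(text):
    # count word frequencies and sort the vocabulary from most to least frequent
    counts = Counter(text)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    # build id -> word and word -> id lookup dictionaries
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return (vocab_to_int, int_to_vocab)
```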
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
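One possible `token_lookup` body is sketched below; the exact token strings are arbitrary as long as they cannot be mistaken for real words (the solved notebooks above use slightly different names):
```
def token_lookup():
    # map each punctuation symbol to an unambiguous placeholder token
    return {
        '.': '||period||',
        ',': '||comma||',
        '"': '||quotation_mark||',
        ';': '||semicolon||',
        '!': '||exclamation_mark||',
        '?': '||question_mark||',
        '(': '||left_parentheses||',
        ')': '||right_parentheses||',
        '-': '||dash||',
        '\n': '||return||'
    }
```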
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word":```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
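A sketch of `batch_data` using the same sliding-window construction as the completed notebooks earlier in this document; each `sequence_length`-word slice is paired with the word that follows it, and shuffling is optional:
```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    # build overlapping (sequence, next-word) pairs with a sliding window
    features, targets = [], []
    for idx in range(len(words) - sequence_length):
        features.append(words[idx:idx + sequence_length])
        targets.append(words[idx + sequence_length])
    data = TensorDataset(torch.from_numpy(np.asarray(features)),
                         torch.from_numpy(np.asarray(targets)))
    # yield batch_size sequences at a time
    return DataLoader(data, shuffle=True, batch_size=batch_size)
```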
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of words** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output one, next word.
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
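A hedged sketch of the usual shape of this step (illustrative; `forward_back_prop_sketch` is a hypothetical name, the hidden state is assumed to be an LSTM `(h, c)` tuple, and gradient clipping is a common safeguard rather than a stated requirement):
```
import torch.nn as nn

def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    # detach the hidden state so gradients do not flow back through earlier batches
    hidden = tuple(h.detach() for h in hidden)
    if train_on_gpu:                                  # defined in the GPU-check cell above
        inp, target = inp.cuda(), target.cuda()
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)     # guard against exploding gradients
    optimizer.step()
    return loss.item(), hidden
```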
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
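Two of the blanks in the next cell follow directly from the vocabulary built earlier; the rest are genuine tuning choices. An illustrative note only, not a prescription:
```
# determined by the data rather than tuned:
vocab_size = len(vocab_to_int)   # one input id per vocabulary token
output_size = vocab_size         # one score per possible next word
```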
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`.
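The provided `generate` function below picks the next word with `torch.multinomial` over the exponentiated scores rather than always taking the arg-max, which keeps the script from looping on the single most likely word. A toy illustration of the difference (made-up numbers, assumptions only):
```
import torch
torch.manual_seed(0)
scores = torch.tensor([[2.0, 1.0, 0.2]])                 # fake output scores for a 3-word vocabulary
greedy_id = scores.argmax(dim=1).item()                  # always word 0
sampled_id = torch.multinomial(scores.exp(), 1).item()   # usually word 0, sometimes 1 or 2
print(greedy_id, sampled_id)
```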
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the index of the most likely next word
top_i = torch.multinomial(output.exp().data, 1).item()
# retrieve that word from the dictionary
word = int_to_vocab[top_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = top_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 20)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 20:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
jerry: oh, you dont recall?
george: (on an imaginary microphone) uh, no, not at this time.
jerry: well, senator, id just like to know, what you knew and when you knew it.
claire: mr. seinfeld. mr. costanza.
george: are, are you sure this is decaf? wheres the orange indicator?
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_counter = Counter(text)
vocab_sorted = sorted(vocab_counter, key = vocab_counter.get, reverse = True)
int_to_vocab = {w: word for w, word in enumerate(vocab_sorted)}
vocab_to_int = {word: w for w, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
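For intuition, a hedged sketch of how such a dictionary is typically applied during preprocessing (using the `token_lookup` implemented in the next cell; the real application lives in `helper.py`, so this is only an assumption about its shape):
```
# illustrative only: pad each symbol with spaces so it splits into its own "word"
sample = 'hello, world!'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())
# -> ['hello', '||comma||', 'world', '||exclamation_mark||']
```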
###Code
from string import punctuation
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
token_dict = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
int_text[:10]
' '.join([int_to_vocab[i] for i in int_text[:10]])
dict(list(int_to_vocab.items())[:10])
dict(list(vocab_to_int.items())[:10])
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
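A quick self-check you might run once the function below is filled in (a hedged example that relies only on the tuple contract described above):
```
# round-trip check: every word maps to an id and back to the same word
v2i, i2v = create_lookup_tables('the quick brown fox jumps over the lazy dog'.split())
assert all(i2v[v2i[word]] == word for word in v2i)
assert len(v2i) == len(i2v) == 8    # nine tokens, eight unique ('the' repeats)
print('lookup tables look consistent')
```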
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word":```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_batch_data(batch_data)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of words** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output one, next word.
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the index of the most likely next word
top_i = torch.multinomial(output.exp().data, 1).item()
# retrieve that word from the dictionary
word = int_to_vocab[top_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = top_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
import inspect
import re
def describe(arg):
frame = inspect.currentframe()
callerframeinfo = inspect.getframeinfo(frame.f_back)
try:
context = inspect.getframeinfo(frame.f_back).code_context
caller_lines = ''.join([line.strip() for line in context])
m = re.search(r'describe\s*\((.+?)\)$', caller_lines)
if m:
caller_lines = m.group(1)
position = str(callerframeinfo.filename) + "@" + str(callerframeinfo.lineno)
# Add additional info such as array shape or string length
additional = ''
if hasattr(arg, "shape"):
additional += "[shape={}]".format(arg.shape)
elif hasattr(arg, "__len__"): # shape includes length information
additional += "[len={}]".format(len(arg))
# Use str() representation if it is printable
str_arg = str(arg)
str_arg = str_arg if str_arg.isprintable() else repr(arg)
print(position, "describe(" + caller_lines + ") = ", end='')
print(arg.__class__.__name__ + "(" + str_arg + ")", additional)
else:
print("Describe: couldn't find caller context")
finally:
del frame
del callerframeinfo
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
text = set(text) # remove duplicates
vocab_to_int, int_to_vocab = {}, {}
for index, word in enumerate(text):
vocab_to_int[word] = index
int_to_vocab[index] = word
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n':'||Return||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
device = 'cuda' if train_on_gpu else 'cpu'
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
torch.manual_seed(0) # Have dataloader shuffle be reproducible
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
assert batch_size > 0, "Batch size not positive"
assert sequence_length > 0, "Sequence length not positive"
assert sequence_length < len(words), "Sequence length too long"
n_sequences = len(words) - sequence_length
sequences, targets = [], []
for start_idx in range(n_sequences):
target_idx = start_idx + sequence_length
sequences.append(words[start_idx:target_idx])
targets.append(words[target_idx])
dataset = TensorDataset(torch.tensor(sequences), torch.tensor(targets))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
words = range(0,8)
dataloader = batch_data(words, sequence_length=3, batch_size=2)
for seqs, targets in dataloader:
for i in range(len(targets)):
print("%s -> %s" % (seqs[i].tolist(), targets[i].tolist()))
print("End of batch")
###Output
[4, 5, 6] -> 7
[0, 1, 2] -> 3
End of batch
[1, 2, 3] -> 4
[3, 4, 5] -> 6
End of batch
[2, 3, 4] -> 5
End of batch
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 43, 44, 45, 46, 47],
[ 4, 5, 6, 7, 8],
[ 37, 38, 39, 40, 41],
[ 34, 35, 36, 37, 38],
[ 16, 17, 18, 19, 20],
[ 8, 9, 10, 11, 12],
[ 44, 45, 46, 47, 48],
[ 27, 28, 29, 30, 31],
[ 31, 32, 33, 34, 35],
[ 12, 13, 14, 15, 16]])
torch.Size([10])
tensor([ 48, 9, 42, 39, 21, 13, 49, 32, 36, 17])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
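The hints above describe stacking the LSTM outputs and reshaping to recover the last word scores; the implementation below instead selects the final time step directly. A hedged sketch of the hinted variant, for comparison (it assumes the same `embedding`, `lstm` and `fc` layers and a saved `output_size`):
```
# alternative forward() body following the reshape hint (illustrative only)
def forward_reshape(self, nn_input, hidden):
    batch_size = nn_input.size(0)
    lstm_output, hidden = self.lstm(self.embedding(nn_input), hidden)
    lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)   # stack all time steps
    output = self.fc(lstm_output)
    output = output.view(batch_size, -1, self.output_size)             # (batch, seq, vocab)
    return output[:, -1], hidden                                       # keep only the last step
```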
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super().__init__()
# Save initializer parameters as attributes
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# https://pytorch.org/docs/stable/nn.html#embedding
self.embedding = nn.Embedding(vocab_size, embedding_dim)
# https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM
# batch_first=True gives input and output of shape (batch, seq, feature)
self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=n_layers,
dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
embed = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embed, hidden)
# Select only the final embeddings from dimension 1 (sequences)
seq_len = nn_input.shape[1]
seq_finals = lstm_out.select(1, seq_len-1).contiguous()
fc_out = self.fc(seq_finals)
return fc_out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# nn.LSTM expects the hidden state as a (h_0, c_0) tuple of zero tensors
h_zero = torch.zeros(self.n_layers, batch_size, self.hidden_dim).to(device)
c_zero = torch.zeros(self.n_layers, batch_size, self.hidden_dim).to(device)
return (h_zero, c_zero)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# Duplicate initial hidden state to prevent back prop through whole training history
hidden = tuple([each.data for each in hidden])
optimizer.zero_grad()
output, hidden = rnn(inp.to(device), hidden)
loss = criterion(output, target.to(device))
loss.backward()
# Prevent exploding gradients
nn.utils.clip_grad_norm_(rnn.parameters(), 10)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
I only updated the code to store the displayed losses
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
# global losses # Preserve values on keyboard interrupt
rnn.losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# if batch_i == 300:
# break
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4}, step: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, batch_i, len(train_loader), np.average(batch_losses)))
rnn.losses.append(np.average(batch_losses))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 96
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.0001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 50
# Hidden Dimension
hidden_dim = 400
# Number of RNN Layers
n_layers = 1
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
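Before that full run, a quick comparison of a few candidate settings can help narrow things down. The following is only a sketch, not part of the project code: it reuses the `RNN` class, `train_loader` and `forward_back_prop` defined above, while the helper name `quick_avg_loss` and the `max_steps` cap are hypothetical.

```
# Sketch only: train each candidate hidden size for a limited number of batches
# and compare the average loss, reusing objects defined earlier in this notebook.
def quick_avg_loss(hidden_size, max_steps=2000):
    model = RNN(vocab_size, output_size, embedding_dim, hidden_size, n_layers, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    opt = torch.optim.Adam(model.parameters(), lr=learning_rate)
    crit = nn.CrossEntropyLoss()
    hidden = model.init_hidden(batch_size)
    losses = []
    for step, (inputs, labels) in enumerate(train_loader, 1):
        # stop early, and skip the final incomplete batch
        if step > max_steps or inputs.size(0) != batch_size:
            break
        loss, hidden = forward_back_prop(model, opt, crit, inputs, labels, hidden)
        losses.append(loss)
    return np.average(losses)

for h in [50, 100, 200, 500]:
    print('hidden_dim = {:>4}: average loss {:.3f}'.format(h, quick_avg_loss(h)))
```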
###Code
torch.manual_seed(0) # Make things reproducible
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
from workspace_utils import active_session
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
import matplotlib.pyplot as plt
steps_printed_at = [each * show_every_n_batches for each in range(len(rnn.losses))]
plt.plot(steps_printed_at, rnn.losses)
plt.ylabel("Loss")
plt.ylim(ymax=5)
plt.xlabel("Steps")
plt.show()
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**

**Initial parameters**

I chose embeddings of 50 dimensions since [the Word2Vec Wikipedia article](https://en.wikipedia.org/wiki/Word2vecParameters_and_model_quality) says that 50 is a good size for an embedding. The initially selected hyperparameters were: `batch_size = 32`, `seq_length = 10`, `num_epochs = 1`, `learning_rate = 0.001`, `embedding_dim = 50`, `n_layers = 1`.

I ran the model for 2000 steps and recorded the loss with the following hidden layer sizes:

| Hidden units | Loss |
| ------ | ----- |
| 50 | 4.992 |
| 100 | 4.8651 |
| 200 | 4.781 |
| 500 | 4.712 |

I fixed on 150 hidden neurons, then proceeded to look at the effect of LSTM layers when training for 3400 steps:

| Hidden units | Layer(s) | Loss after 3400 steps |
| ------ | ----- | ---- |
| 150 | 1 | 4.620393567085266 |
| 150 | 2 | 4.818604485988617 |

I chose 1 hidden layer given the quicker loss reduction.

I chose the following learning rates (listed in the order chosen) and achieved the associated losses:

| Learning rate | Loss after 3000 steps |
| ------ | ----- |
| 0.0001 | 5.322323141098022 |
| 0.001 | 4.692433811426163 |
| 0.01 | 4.704901092052459 |
| 0.005 | 4.553797644376755 |

[This article](https://en.wikipedia.org/wiki/Word2vecParameters_and_model_quality) suggested a sequence length of 15, so I chose that value. I increased the batch size to 96 to have gradient descent wander around less. I figured that this was sufficient tuning at this stage, and so proceeded to train the model with these parameters.

----

After training, I realised that the loss wasn't reducing after 6 epochs, getting stuck at about 3.4. *At this stage, the requirements were satisfied, but I decided to keep tuning.*

I guessed that the loss had been higher with 2 layers early on only because the model was more complex, and it now seemed that my 1-layer model was too simple to learn more. Therefore, I increased the number of layers to 2. With 2 layers, this **increased** the loss to 3.8, which stayed stable after about 9 epochs. This disproved my previous hypothesis.

I then returned the number of layers to 1, and instead increased the hidden layer to 400. I also decreased the learning rate to 0.0001, hoping that smaller steps would bounce around less and find a better minimum. This reduced the loss to about 3.2.

--- Checkpoint

After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
# Avoid warning:
# /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:40:
# UserWarning: RNN module weights are not part of single contiguous chunk of memory.
# This means they need to be compacted at every call, possibly greatly increasing memory usage.
# To compact weights again call flatten_parameters().
trained_rnn.lstm.flatten_parameters()
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
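Before reading the full function, here is a minimal, self-contained illustration of the top-k sampling step it performs. The scores below are made-up values for a toy five-word vocabulary, not output from the trained model.

```
# Toy illustration of top-k sampling over a vector of word scores.
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.0, 3.0, 0.1]])  # fake scores for a 5-word vocabulary
p = F.softmax(scores, dim=1).data                    # convert scores to probabilities
top_k = 3
p, top_i = p.topk(top_k)                             # keep the k most likely word indices
p = p.numpy().squeeze()
top_i = top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())      # sample one index, weighted by probability
print(word_i)                                        # usually 3 or 0, occasionally 2
```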
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
# prime_word = 'jerry' # name for starting the script
# It seems I'd need to do an initial sequence of "master of your domain" rather than a single word...
prime_word = 'master'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
# generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
generated_script = generate(trained_rnn, vocab_to_int[prime_word], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
master.
george: yeah, i know...
jerry: what?
kramer: well, you know, i mean, the only thing that i was just a little, you know, i think you can have to be a little bit about this.
kramer: oh.(to jerry) you know, i just remembered my name.
george: oh!(she leaves)
jerry:(pause) yeah!
elaine:(sighs) what?
jerry: the--- i- it's not a pig- a- reeno.
kramer: hey!
jerry: hey! how did you know?
kramer: i don't know, i don't know what to do. you know, i don't have to get it.
george: oh, come on.(to jerry) i told you to get that thing.
frank:(to george) so what?
elaine: i was in the shower..
elaine:(to jerry) hey, hey, hey, hey, hey. hey, hey! hey! you gotta get some sleep here!(grabs his coat) i don't want you to have it.
george: oh, no.
jerry: what?
george: you know what, you think you should have to do this...?
kramer:(to jerry) yeah, i'm gonna have to...(to kramer) hey, hey, hey, how was this?(points to a look at a man and he goes to his face and he was just trying to make a big salad.
george: i can't.
elaine:(laughs) well, i don't have to be a little nervous.
jerry: what is it?
kramer: it's a hundred dollars.
jerry: well, what about the difference?
elaine: oh, no, no...(jerry looks at george and looks at the other room)
george: what about the wedding
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
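As a tiny illustration of what these two dictionaries look like (a toy example only; the words and resulting ids are made up, and the real implementation follows in the next cell):

```
# Build the two lookup dictionaries for a tiny made-up vocabulary.
toy_words = ['jerry', 'hello', 'jerry', 'newman']
toy_vocab_to_int = {word: idx for idx, word in enumerate(set(toy_words))}
toy_int_to_vocab = {idx: word for word, idx in toy_vocab_to_int.items()}
print(toy_vocab_to_int)   # e.g. {'hello': 0, 'jerry': 1, 'newman': 2} (set order may vary)
print(toy_int_to_vocab)   # e.g. {0: 'hello', 1: 'jerry', 2: 'newman'}
```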
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(vocab):
"""
Create lookup tables for vocabulary
:param vocab: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
word_frequency = Counter(vocab)
vocab_sorted = sorted(word_frequency, key=word_frequency.get, reverse=True)
int_to_vocab = {idx: word for idx, word in enumerate(vocab_sorted)}
vocab_to_int = {word: idx for idx, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '||PERIOD||'
tokens[','] = '||COMMA||'
tokens['"'] = '||QUOTATION_MARK||'
tokens[';'] = '||SEMICOLON||'
tokens['!'] = '||EXCLAMATION_MARK||'
tokens['?'] = '||QUESTION_MARK||'
tokens['('] = '||LEFT_PAREN||'
tokens[')'] = '||RIGHT_PAREN||'
tokens['-'] = '||DASH||'
tokens['\n'] = '||NEW_LINE||'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
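The windowing described above can be sanity-checked with a few lines before wrapping the pairs in a `TensorDataset`/`DataLoader`. This is an illustration only, separate from the `batch_data` implementation below.

```
# Enumerate the (feature, target) pairs for the toy example above.
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
for idx in range(len(words) - sequence_length):
    feature = words[idx:idx + sequence_length]
    target = words[idx + sequence_length]
    print(feature, '->', target)
# [1, 2, 3, 4] -> 5
# [2, 3, 4, 5] -> 6
# [3, 4, 5, 6] -> 7
```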
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
end_idx = idx + sequence_length
x_batch = words[idx:end_idx]
y_batch = words[end_idx]
x.append(x_batch)
y.append(y_batch)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# return a dataloader
return DataLoader(data, shuffle=True, batch_size=batch_size)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 12, 13, 14, 15, 16],
[ 16, 17, 18, 19, 20],
[ 22, 23, 24, 25, 26],
[ 27, 28, 29, 30, 31],
[ 0, 1, 2, 3, 4],
[ 19, 20, 21, 22, 23],
[ 32, 33, 34, 35, 36],
[ 42, 43, 44, 45, 46],
[ 13, 14, 15, 16, 17],
[ 15, 16, 17, 18, 19]])
torch.Size([10])
tensor([ 17, 21, 27, 32, 5, 24, 37, 47, 18, 20])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
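To make the reshaping described in the hints concrete, here is a small shape walk-through with made-up sizes; the random tensor simply stands in for real LSTM output.

```
# Shape walk-through of "stack -> fully-connected -> reshape -> take last time step".
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, output_size = 2, 3, 4, 5
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)  # (batch, seq, hidden)
fc = nn.Linear(hidden_dim, output_size)

stacked = lstm_output.contiguous().view(-1, hidden_dim)        # (batch*seq, hidden)
scores = fc(stacked)                                           # (batch*seq, output)
scores = scores.view(batch_size, -1, output_size)              # (batch, seq, output)
last_scores = scores[:, -1]                                    # (batch, output): last word scores
print(last_scores.shape)                                       # torch.Size([2, 5])
```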
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
## Embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
## Fully Connected Output Layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
embedded_input = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embedded_input, hidden)
# stack up lstm outputs
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
output = self.fc(lstm_output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
## we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get predicted outputs
output, h = rnn(inp, h)
# calculate loss
loss = criterion(output, target)
# perform backpropagation and optimization
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 12 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 100
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.689335102081299
Epoch: 1/10 Loss: 4.990469039916992
Epoch: 1/10 Loss: 4.762852001190185
Epoch: 1/10 Loss: 4.638548399925232
Epoch: 1/10 Loss: 4.520133486747742
Epoch: 1/10 Loss: 4.442440998077393
Epoch: 1/10 Loss: 4.406807768344879
Epoch: 1/10 Loss: 4.353310613155365
Epoch: 1/10 Loss: 4.306007277488709
Epoch: 1/10 Loss: 4.28541527891159
Epoch: 1/10 Loss: 4.252699371814728
Epoch: 1/10 Loss: 4.242565767288208
Epoch: 1/10 Loss: 4.211694393634796
Epoch: 2/10 Loss: 4.09004413367303
Epoch: 2/10 Loss: 4.012842516899109
Epoch: 2/10 Loss: 4.003110827445984
Epoch: 2/10 Loss: 3.995257466316223
Epoch: 2/10 Loss: 3.9906706051826477
Epoch: 2/10 Loss: 3.991211027622223
Epoch: 2/10 Loss: 3.9961580533981325
Epoch: 2/10 Loss: 3.9808934264183042
Epoch: 2/10 Loss: 3.9436436290740966
Epoch: 2/10 Loss: 3.9623371529579163
Epoch: 2/10 Loss: 3.945345802783966
Epoch: 2/10 Loss: 3.9629678020477295
Epoch: 2/10 Loss: 3.9369547820091246
Epoch: 3/10 Loss: 3.855660996900117
Epoch: 3/10 Loss: 3.7924579558372495
Epoch: 3/10 Loss: 3.7916239104270937
Epoch: 3/10 Loss: 3.78401593208313
Epoch: 3/10 Loss: 3.781112868309021
Epoch: 3/10 Loss: 3.7864360413551332
Epoch: 3/10 Loss: 3.7707386717796325
Epoch: 3/10 Loss: 3.7857625164985658
Epoch: 3/10 Loss: 3.77382771396637
Epoch: 3/10 Loss: 3.7824574990272524
Epoch: 3/10 Loss: 3.7858978509902954
Epoch: 3/10 Loss: 3.779183482170105
Epoch: 3/10 Loss: 3.8029033913612365
Epoch: 4/10 Loss: 3.7201276404305923
Epoch: 4/10 Loss: 3.640900936126709
Epoch: 4/10 Loss: 3.6390393896102906
Epoch: 4/10 Loss: 3.651479829788208
Epoch: 4/10 Loss: 3.6736504735946656
Epoch: 4/10 Loss: 3.6317503776550293
Epoch: 4/10 Loss: 3.6692456407546996
Epoch: 4/10 Loss: 3.6592721338272094
Epoch: 4/10 Loss: 3.6719485473632814
Epoch: 4/10 Loss: 3.6794757652282715
Epoch: 4/10 Loss: 3.658661730289459
Epoch: 4/10 Loss: 3.685721179962158
Epoch: 4/10 Loss: 3.7057959332466126
Epoch: 5/10 Loss: 3.6109846699828942
Epoch: 5/10 Loss: 3.543990177631378
Epoch: 5/10 Loss: 3.5439520463943484
Epoch: 5/10 Loss: 3.5543925104141234
Epoch: 5/10 Loss: 3.5421084151268007
Epoch: 5/10 Loss: 3.5595854954719544
Epoch: 5/10 Loss: 3.5671810064315794
Epoch: 5/10 Loss: 3.5933289227485656
Epoch: 5/10 Loss: 3.583650695323944
Epoch: 5/10 Loss: 3.5925776000022887
Epoch: 5/10 Loss: 3.5935501232147216
Epoch: 5/10 Loss: 3.5848735785484314
Epoch: 5/10 Loss: 3.600077327251434
Epoch: 6/10 Loss: 3.5245057672015894
Epoch: 6/10 Loss: 3.4567282438278197
Epoch: 6/10 Loss: 3.4492290363311766
Epoch: 6/10 Loss: 3.4556948804855345
Epoch: 6/10 Loss: 3.4758810696601867
Epoch: 6/10 Loss: 3.485396713733673
Epoch: 6/10 Loss: 3.5147147693634033
Epoch: 6/10 Loss: 3.5213922600746157
Epoch: 6/10 Loss: 3.5079230608940124
Epoch: 6/10 Loss: 3.5110500435829164
Epoch: 6/10 Loss: 3.540863757133484
Epoch: 6/10 Loss: 3.5183197450637818
Epoch: 6/10 Loss: 3.530316883087158
Epoch: 7/10 Loss: 3.47198021707456
Epoch: 7/10 Loss: 3.3885739154815675
Epoch: 7/10 Loss: 3.3907443442344665
Epoch: 7/10 Loss: 3.404612669944763
Epoch: 7/10 Loss: 3.4167795372009278
Epoch: 7/10 Loss: 3.429683964252472
Epoch: 7/10 Loss: 3.4322836065292357
Epoch: 7/10 Loss: 3.4292672295570372
Epoch: 7/10 Loss: 3.4561973814964295
Epoch: 7/10 Loss: 3.466958176612854
Epoch: 7/10 Loss: 3.457442395210266
Epoch: 7/10 Loss: 3.469754644870758
Epoch: 7/10 Loss: 3.4851392683982847
Epoch: 8/10 Loss: 3.414517692051643
Epoch: 8/10 Loss: 3.3394555287361145
Epoch: 8/10 Loss: 3.3458389449119568
Epoch: 8/10 Loss: 3.3648141913414
Epoch: 8/10 Loss: 3.3736281752586366
Epoch: 8/10 Loss: 3.3770995969772337
Epoch: 8/10 Loss: 3.3871143264770507
Epoch: 8/10 Loss: 3.4066839089393617
Epoch: 8/10 Loss: 3.386567701816559
Epoch: 8/10 Loss: 3.415872033119202
Epoch: 8/10 Loss: 3.402440727710724
Epoch: 8/10 Loss: 3.4310655674934387
Epoch: 8/10 Loss: 3.431197931289673
Epoch: 9/10 Loss: 3.3661745737406834
Epoch: 9/10 Loss: 3.291727011680603
Epoch: 9/10 Loss: 3.2933093738555907
Epoch: 9/10 Loss: 3.3234049105644226
Epoch: 9/10 Loss: 3.3226202754974365
Epoch: 9/10 Loss: 3.33066694355011
Epoch: 9/10 Loss: 3.3359247121810913
Epoch: 9/10 Loss: 3.3585346493721007
Epoch: 9/10 Loss: 3.3688218941688537
Epoch: 9/10 Loss: 3.365759085178375
Epoch: 9/10 Loss: 3.390573311328888
Epoch: 9/10 Loss: 3.3818308234214784
Epoch: 9/10 Loss: 3.4139123697280884
Epoch: 10/10 Loss: 3.316983445370493
Epoch: 10/10 Loss: 3.2662136254310608
Epoch: 10/10 Loss: 3.2569780564308166
Epoch: 10/10 Loss: 3.2723226146698
Epoch: 10/10 Loss: 3.2950291895866393
Epoch: 10/10 Loss: 3.287660878658295
Epoch: 10/10 Loss: 3.328598433971405
Epoch: 10/10 Loss: 3.3206465549468995
Epoch: 10/10 Loss: 3.343081605434418
Epoch: 10/10 Loss: 3.3318973751068115
Epoch: 10/10 Loss: 3.3310949902534484
Epoch: 10/10 Loss: 3.35181494474411
Epoch: 10/10 Loss: 3.3633791499137877
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried sequence_length values of 100, 50, 25, 10, 15 and 12. The first three did not converge efficiently, and I found 12 to be the best among the remaining three. From the experience gained in previous exercises in the course, I found that setting n_layers=2 works well in terms of both model convergence and training time. Likewise, I experimented with hidden_dim values of 128, 256 and 512. I did not find 512 any better than 256, while hidden_dim=128 gave a higher loss than 256. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# Set an id for each word (word to id)
vocab_to_int={word: idx for idx, word in enumerate(set(text))}
# For each id obtain the associates word (id to word)
int_to_vocab={value : key for (key, value) in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dict_punctuation={'.':'|Period|', ',':'|Comma|',
'"':'|Quotation|',';':'|Semicolon|',
'!':'|Exclamation|','?':'|Question|',
'(':'|Left_Parentheses|',')':'|Right_Parentheses|',
'-':'|Dash|','\n':'|Return|'}
return dict_punctuation
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check Point 1This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# Create a list with the target variable
target=words[sequence_length:]
# Create a list of list with the features (each sublist is a feature)
features=[words[idx:(idx+sequence_length)] for idx in range(0, len(words)-sequence_length)]
# Create a tensor dataset
data = TensorDataset(torch.tensor(features), torch.tensor(target))
# Create a data loader object
data_loader = torch.utils.data.DataLoader(data, shuffle=True,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# The function seems to work smoothly
test=batch_data(int_text[:8], 5, 1)
print('The input array is {}\n =============='.format(np.transpose(int_text[:8])))
for text, target in test:
print('The generated data is\n {},\n and the target is\n {}\n........................'.format(np.array(text),int(target)))
###Output
The input array is [13592 4847 7508 2726 2726 2726 1852 7508]
==============
The generated data is
[[7508 2726 2726 2726 1852]],
and the target is
7508
........................
The generated data is
[[4847 7508 2726 2726 2726]],
and the target is
1852
........................
The generated data is
[[13592 4847 7508 2726 2726]],
and the target is
2726
........................
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 40, 41, 42, 43, 44],
[ 10, 11, 12, 13, 14],
[ 0, 1, 2, 3, 4],
[ 32, 33, 34, 35, 36],
[ 9, 10, 11, 12, 13],
[ 41, 42, 43, 44, 45],
[ 22, 23, 24, 25, 26],
[ 13, 14, 15, 16, 17],
[ 39, 40, 41, 42, 43],
[ 5, 6, 7, 8, 9]])
torch.Size([10])
tensor([ 45, 15, 5, 37, 14, 46, 27, 18, 44, 10])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.1):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
        # Set the parameters as class variables
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.output_size = output_size
        self.vocab_size = vocab_size
#
self.embedding = nn.Embedding(vocab_size, embedding_dim)
# set class variable
# define model layers
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.hidden2tag = nn.Linear(hidden_dim, output_size)
#self.sigmoid = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
        batch_size = nn_input.size(0)
        # embeddings and lstm_out
        embeds = self.embedding(nn_input)
        output, hidden = self.lstm(embeds, hidden)  # keep the updated hidden state so it is returned below
# Dropout ------------
output = self.dropout(output)
# Shape output
output = output.contiguous().view(-1, self.hidden_dim)
# Final output
output = self.hidden2tag(output)
# sigmoid function
#sig_out = self.sigmoid(output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
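        # grab one of the model's parameter tensors so the new zero tensors are created with the same data type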
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# zero gradients
rnn.zero_grad()
#print('Input forward_back_prop: {}'.format(inp))
# move data to GPU, if available ##
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda() # inputs and target to CUDA
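    # detach the hidden state from its history so backpropagation stops at the current batch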
h = tuple([each.data for each in hidden])
# Forward propagation (return prediction and hidden)
prediction, hidden = rnn(inp, h)
# Calculate the loss and perform backpropagation and optimization
loss = criterion(prediction,target)
loss.backward()
    # Clip gradients to prevent them from exploding, then take an optimizer step
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
 HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 6 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int.keys())
# Output size
output_size = len(vocab_to_int.keys())
# Embedding Dimension
embedding_dim = 100  # int(vocab_size**0.25)
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# I commented out these lines of code to avoid re-running the training process.
"""
from workspace_utils import active_session
with active_session():
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.1)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
torch.save(trained_rnn.state_dict(), 'rnn_model.pt')
#helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
"""
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 5.663258090019226
Epoch: 1/15 Loss: 4.974978388309479
Epoch: 1/15 Loss: 4.731637468338013
Epoch: 1/15 Loss: 4.612092967987061
Epoch: 1/15 Loss: 4.504213795185089
Epoch: 1/15 Loss: 4.417986435890198
Epoch: 1/15 Loss: 4.370470092773438
Epoch: 1/15 Loss: 4.317675413131714
Epoch: 1/15 Loss: 4.294067976951599
Epoch: 1/15 Loss: 4.241216436386108
Epoch: 1/15 Loss: 4.214857961654663
Epoch: 1/15 Loss: 4.190512034416199
Epoch: 1/15 Loss: 4.148846408367157
Epoch: 2/15 Loss: 4.053176667299064
Epoch: 2/15 Loss: 3.9895072927474975
Epoch: 2/15 Loss: 3.9703796162605287
Epoch: 2/15 Loss: 3.9970931606292726
Epoch: 2/15 Loss: 3.9731620512008665
Epoch: 2/15 Loss: 3.9681954469680787
Epoch: 2/15 Loss: 3.9488897376060486
Epoch: 2/15 Loss: 3.9478984870910643
Epoch: 2/15 Loss: 3.9462607555389404
Epoch: 2/15 Loss: 3.925812830924988
Epoch: 2/15 Loss: 3.951374550819397
Epoch: 2/15 Loss: 3.9212351932525635
Epoch: 2/15 Loss: 3.9174063687324523
Epoch: 3/15 Loss: 3.8200453028959385
Epoch: 3/15 Loss: 3.7386282682418823
Epoch: 3/15 Loss: 3.7579594388008117
Epoch: 3/15 Loss: 3.7462767906188965
Epoch: 3/15 Loss: 3.7273158679008485
Epoch: 3/15 Loss: 3.7300032291412353
Epoch: 3/15 Loss: 3.7327031984329224
Epoch: 3/15 Loss: 3.768801794528961
Epoch: 3/15 Loss: 3.757462275028229
Epoch: 3/15 Loss: 3.7550300965309145
Epoch: 3/15 Loss: 3.7617292289733886
Epoch: 3/15 Loss: 3.7689134130477906
Epoch: 3/15 Loss: 3.7620597701072693
Epoch: 4/15 Loss: 3.651756169011103
Epoch: 4/15 Loss: 3.580344995498657
Epoch: 4/15 Loss: 3.5716244969367983
Epoch: 4/15 Loss: 3.5850346388816834
Epoch: 4/15 Loss: 3.5969847054481505
Epoch: 4/15 Loss: 3.61905899477005
Epoch: 4/15 Loss: 3.598149802684784
Epoch: 4/15 Loss: 3.625283618450165
Epoch: 4/15 Loss: 3.6171423163414
Epoch: 4/15 Loss: 3.637788908481598
Epoch: 4/15 Loss: 3.6303118286132814
Epoch: 4/15 Loss: 3.6185184020996095
Epoch: 4/15 Loss: 3.6318580222129824
Epoch: 5/15 Loss: 3.533268683712057
Epoch: 5/15 Loss: 3.443302219390869
Epoch: 5/15 Loss: 3.448633542537689
Epoch: 5/15 Loss: 3.4698262400627136
Epoch: 5/15 Loss: 3.4839738097190858
Epoch: 5/15 Loss: 3.483365166187286
Epoch: 5/15 Loss: 3.493470165729523
Epoch: 5/15 Loss: 3.499399845600128
Epoch: 5/15 Loss: 3.505293409347534
Epoch: 5/15 Loss: 3.5149912037849425
Epoch: 5/15 Loss: 3.522236449241638
Epoch: 5/15 Loss: 3.529040623664856
Epoch: 5/15 Loss: 3.544809880256653
Epoch: 6/15 Loss: 3.426417585742978
Epoch: 6/15 Loss: 3.3352773170471193
Epoch: 6/15 Loss: 3.3538643884658814
Epoch: 6/15 Loss: 3.3326404581069946
Epoch: 6/15 Loss: 3.3662939414978026
Epoch: 6/15 Loss: 3.388433437347412
Epoch: 6/15 Loss: 3.397740466594696
Epoch: 6/15 Loss: 3.3960886330604554
Epoch: 6/15 Loss: 3.4134975595474244
Epoch: 6/15 Loss: 3.4226679968833924
Epoch: 6/15 Loss: 3.431689368247986
Epoch: 6/15 Loss: 3.4431109986305235
Epoch: 6/15 Loss: 3.457769995689392
Epoch: 7/15 Loss: 3.3366429921397236
Epoch: 7/15 Loss: 3.2163876152038573
Epoch: 7/15 Loss: 3.263990321159363
Epoch: 7/15 Loss: 3.2836737785339354
Epoch: 7/15 Loss: 3.2897709798812866
Epoch: 7/15 Loss: 3.2887202916145326
Epoch: 7/15 Loss: 3.3207587175369264
Epoch: 7/15 Loss: 3.3121140356063843
Epoch: 7/15 Loss: 3.3443190593719483
Epoch: 7/15 Loss: 3.354572840690613
Epoch: 7/15 Loss: 3.3512333822250366
Epoch: 7/15 Loss: 3.3585531215667723
Epoch: 7/15 Loss: 3.3638349785804746
Epoch: 8/15 Loss: 3.2458772403413914
Epoch: 8/15 Loss: 3.1732354836463927
Epoch: 8/15 Loss: 3.194883086681366
Epoch: 8/15 Loss: 3.1934889554977417
Epoch: 8/15 Loss: 3.2077871503829956
Epoch: 8/15 Loss: 3.213690211772919
Epoch: 8/15 Loss: 3.227969316482544
Epoch: 8/15 Loss: 3.2649383850097657
Epoch: 8/15 Loss: 3.2624859008789064
Epoch: 8/15 Loss: 3.2705423016548156
Epoch: 8/15 Loss: 3.290566128730774
Epoch: 8/15 Loss: 3.3023590745925904
Epoch: 8/15 Loss: 3.3002757964134215
Epoch: 9/15 Loss: 3.180394997660713
Epoch: 9/15 Loss: 3.0846273612976076
Epoch: 9/15 Loss: 3.1130513925552368
Epoch: 9/15 Loss: 3.132577859401703
Epoch: 9/15 Loss: 3.13539009141922
Epoch: 9/15 Loss: 3.17213000869751
Epoch: 9/15 Loss: 3.161185974597931
Epoch: 9/15 Loss: 3.1835865440368654
Epoch: 9/15 Loss: 3.183850060939789
Epoch: 9/15 Loss: 3.2200051379203796
Epoch: 9/15 Loss: 3.2231619257926942
Epoch: 9/15 Loss: 3.245440628528595
Epoch: 9/15 Loss: 3.251837821960449
Epoch: 10/15 Loss: 3.1343552456059567
Epoch: 10/15 Loss: 3.021392032146454
Epoch: 10/15 Loss: 3.0498223094940187
Epoch: 10/15 Loss: 3.0733331441879272
Epoch: 10/15 Loss: 3.0837340478897093
Epoch: 10/15 Loss: 3.120856162071228
Epoch: 10/15 Loss: 3.1049691095352174
Epoch: 10/15 Loss: 3.128293273448944
Epoch: 10/15 Loss: 3.1414620447158814
Epoch: 10/15 Loss: 3.1498056592941284
Epoch: 10/15 Loss: 3.1692037019729615
Epoch: 10/15 Loss: 3.1892218647003174
Epoch: 10/15 Loss: 3.19181530046463
Epoch: 11/15 Loss: 3.0655889378243555
Epoch: 11/15 Loss: 2.9849990825653077
Epoch: 11/15 Loss: 2.989135643482208
Epoch: 11/15 Loss: 3.0097830567359924
Epoch: 11/15 Loss: 3.032475079059601
Epoch: 11/15 Loss: 3.049156584739685
Epoch: 11/15 Loss: 3.0724888558387757
Epoch: 11/15 Loss: 3.0932924423217774
Epoch: 11/15 Loss: 3.09987087392807
Epoch: 11/15 Loss: 3.0845982084274293
Epoch: 11/15 Loss: 3.138016402721405
Epoch: 11/15 Loss: 3.1391333870887754
Epoch: 11/15 Loss: 3.1453376269340514
Epoch: 12/15 Loss: 3.0292437544056012
Epoch: 12/15 Loss: 2.9214587507247924
Epoch: 12/15 Loss: 2.9582755999565125
Epoch: 12/15 Loss: 2.96631263589859
Epoch: 12/15 Loss: 2.9855646600723267
Epoch: 12/15 Loss: 3.0016551280021666
Epoch: 12/15 Loss: 3.0090343608856203
Epoch: 12/15 Loss: 3.0549455437660216
Epoch: 12/15 Loss: 3.064172354698181
Epoch: 12/15 Loss: 3.0738224091529847
Epoch: 12/15 Loss: 3.0731438827514648
Epoch: 12/15 Loss: 3.0813179783821107
Epoch: 12/15 Loss: 3.1010269145965577
Epoch: 13/15 Loss: 2.99152966795568
Epoch: 13/15 Loss: 2.8719326167106627
Epoch: 13/15 Loss: 2.925647789478302
Epoch: 13/15 Loss: 2.9353894028663636
Epoch: 13/15 Loss: 2.9563202605247496
Epoch: 13/15 Loss: 2.9768732051849365
Epoch: 13/15 Loss: 2.9817828540802003
Epoch: 13/15 Loss: 2.9893096594810484
Epoch: 13/15 Loss: 3.0023242139816286
Epoch: 13/15 Loss: 3.0445122718811035
Epoch: 13/15 Loss: 3.0298868894577025
Epoch: 13/15 Loss: 3.0447888989448546
Epoch: 13/15 Loss: 3.0516328353881836
Epoch: 14/15 Loss: 2.9437864523061905
Epoch: 14/15 Loss: 2.8412419533729554
Epoch: 14/15 Loss: 2.8936699471473695
Epoch: 14/15 Loss: 2.9030875000953675
Epoch: 14/15 Loss: 2.917694800853729
Epoch: 14/15 Loss: 2.92470934677124
Epoch: 14/15 Loss: 2.9426044716835023
Epoch: 14/15 Loss: 2.9456413397789003
Epoch: 14/15 Loss: 2.9782276697158814
Epoch: 14/15 Loss: 2.986908352851868
Epoch: 14/15 Loss: 3.004481569290161
Epoch: 14/15 Loss: 3.0025596771240233
Epoch: 14/15 Loss: 3.0122240405082703
Epoch: 15/15 Loss: 2.915432571011554
Epoch: 15/15 Loss: 2.8088239612579344
Epoch: 15/15 Loss: 2.849580877780914
Epoch: 15/15 Loss: 2.8489021005630493
Epoch: 15/15 Loss: 2.8945327224731447
Epoch: 15/15 Loss: 2.8934730386734007
Epoch: 15/15 Loss: 2.913402572154999
Epoch: 15/15 Loss: 2.9084716186523436
Epoch: 15/15 Loss: 2.938759408950806
Epoch: 15/15 Loss: 2.9509732837677003
Epoch: 15/15 Loss: 2.9662244000434876
Epoch: 15/15 Loss: 2.9851387214660643
Epoch: 15/15 Loss: 3.0253033781051637
Model Trained and Saved
###Markdown
 Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Tuning the model hyperparameters took a lot of time in my case. * I tried different `sequence_lengths`, starting very high (around 120 words) and finally settled on a length of 6 words. This length makes sense for short lines of dialogue and keeps training fast. Nevertheless, after seeing the results, and if I had more computational capacity and time, I would probably test a slightly longer length, for example 10-15.* I set the number of layers equal to 2. Even when I tested higher numbers of layers, such as 4-5, the results did not improve significantly, so I kept this parameter at 2.* For the hidden dimension, I initially set it to 256 (taking into account the other models developed in this lesson), and after some trial and error I found that the model did not improve with smaller or larger values.* Other parameters, such as the `learning rate`, the `number of epochs` and the `embedding dimension`, were adjusted according to the results I was obtaining. * In the case of the `embedding dimension`, I started with: $$ \mbox{embedding dimension}=\sqrt[4]{\mbox{vocab_size}} $$ following [this blog](https://developers.googleblog.com/2017/11/introducing-tensorflow-feature-columns.html) by a Google developer (a short illustrative sketch of this heuristic follows below). With this formula, the `embedding dimension` was around 12 in our case, which seemed a little small to me, so I increased it to 100 and obtained better results. > In general, I am sure this is only a first step and that this model can be improved significantly. Although I would like to spend more time on this project, I cannot spend my limited GPU resources and time on a single project. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!> For some reason, the original code that saves the whole model (see below) does not work in my case (some kind of error related to CUDA).```pythonhelper.save_model('./save/trained_rnn', trained_rnn)```I changed the syntax to one that I know works, with the important remark that the model must be initialized first, because this option only saves the final parameters of the RNN and not the whole network.```pythontorch.save(trained_rnn.state_dict(), 'rnn_model.pt')```
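As a quick, illustrative sketch of the fourth-root heuristic mentioned above (the vocabulary size below is a made-up round number for illustration, not the exact value produced by the preprocessing step):

```python
# Rough sketch of the fourth-root embedding-size heuristic referenced above.
# vocab_size_example is an illustrative assumption, not the exact preprocessed vocabulary size.
vocab_size_example = 21000
heuristic_embedding_dim = round(vocab_size_example ** 0.25)
print(heuristic_embedding_dim)  # ~12, which felt too small here, hence the larger choice of 100
```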
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
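# remap saved tensors onto the GPU when one is available, otherwise load them onto the CPU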
if torch.cuda.is_available():
map_location=lambda storage, loc: storage.cuda()
else:
map_location='cpu'
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
#trained_rnn = helper.load_model('./save/trained_rnn')
# load the model that got the best validation accuracy (uncomment the line below)
trained_rnn=RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.1)
trained_rnn.load_state_dict(torch.load('rnn_model.pt',map_location=map_location))
#trained_rnn =torch.load('./save/trained_rnn', map_location=map_location)
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # the generated word becomes the next "current sequence" and the cycle can continue
        if train_on_gpu:
            current_seq = current_seq.cpu()  # np.roll needs a cpu tensor, not a cuda one
        current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
kramer: whale' drab jill skiing certainly certainly certainly caper circ conference aaaahh moles sal margarine certainly gathered gathered matata skin midler's lopper locked conference feinerman's submerged hoooot dragnet fiddles risked bowel gathered gathered handicapped skin midler's furrows locked ruin punks cont conference skier midler's crooks limos conference mind's slug midler's 301 midler's drab receipts certainly gathered gathered matata gehrig cont honeymoon ruin skier fiddles groveling celebrities headache certainly skin lopper jimmy: shim fiddles bumps titanic disloyalty virus haircuts certainly gathered gathered fudge ruin ramon: drab bats protector boxing plum 242s certainly ruin punks cont drab corp terminal celebrities plantain popenjays midler's foul *great* midler's fiddles margarine whale' haircuts certainly gathered gathered handicapped vincent: bowel gathered gathered matata gehrig cont conference feinerman's conference [watching fiddles bumps certainly certainly certainly gathered gathered fudge skin jehova's locked ohhh cont ruin punks cont conference skier midler's crooks ruin certainly gathered gathered wesson conference punks certainly conference feinerman's conference selfish 301 hampshire ability procession certainly gathered gathered fudge skin lincoln locked conference comics ned's punks vincent: submerged gasp marine certainly gathered gathered wesson ruin comics punks vincent: swordfish dear whale' ruin mind bowel gathered gathered fudge teacher certainly gathered gathered fudge ruin comics fixing midler's 301 bowel gathered gathered handicapped conference comics punks certainly conference punks cont tabachnick: improve rinsteinbrenner highlight certainly gathered gathered wesson skin midler's furrows locked mentions trilogy gathered gathered matata skin nooope locked ohhh cont ruin fixing midler's corked relaxers cont your ******** bowel gathered gathered fudge conference floppin certainly conference floppin 301 midler's sal margarine certainly skin furrows dear 53 drab dice 'us' cont caper ruin punks cont conference feinerman's conference tortoise stinks swordfish certainly gathered gathered fudge ruin punks cont conference feinerman's spanking honk caper ruin mind cont ray's caper conference mind sane midler's drab *commit* cont redwood fianc&mac226 sane midler's dragnet dawn midler's heighten midler's drab *commit* certainly gathered gathered handicapped skin 'thick locked ruin floppin 301 hampshire ability fiddles farmer's cont caper drab 'medication improve bloomingdale caper conference improve barren *gay* voices jimmy: burgeoning ability limos cont devious conference selfish dragnet dawn midler's heighten fiddles safire certainly conference floppin stinks swordfish certainly gathered gathered fudge mentions cont cream cont ruin punks cont conference feinerman's spanking fiddles bumps tightness 'action certainly gathered gathered fudge ruin punks vincent: bowel ruin plenty moles fiddles oddly kathy certainly gathered gathered matata ohhh trilogy gathered gathered handicapped ohhh cont adoring ruin sushi bowel
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
 TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = set(text)
vocab_to_int = {word: idx for idx, word in enumerate(words)}
int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_lookup = {".": "||period||",
",": "||comma||",
"\"": "||quotationmark||",
";": "||semicolon||",
"!": "||exclamationmark||",
"?": "||questionmark||",
"(": "||leftparentheses||",
")": "||rightparentheses||",
"-": "||dash||",
"\n": "||return||"}
return token_lookup
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
len_words = len(words)
x = []
y = []
for idx in range(0, len_words):
if idx + sequence_length < len_words:
x.append(words[idx:idx+sequence_length])
y.append(words[idx+sequence_length])
x = np.array(x)
y = np.array(y)
data = TensorDataset(torch.from_numpy(x), torch.from_numpy(y))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size,
num_workers=0, shuffle=True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
words = list(range(20))
data_loader = batch_data(words, 3, 5)
for idx, batch in enumerate(data_loader):
print(batch)
###Output
[tensor([[ 1, 2, 3],
[ 4, 5, 6],
[ 6, 7, 8],
[10, 11, 12],
[ 8, 9, 10]], dtype=torch.int32), tensor([ 4, 7, 9, 13, 11], dtype=torch.int32)]
[tensor([[15, 16, 17],
[16, 17, 18],
[ 2, 3, 4],
[ 9, 10, 11],
[14, 15, 16]], dtype=torch.int32), tensor([18, 19, 5, 12, 17], dtype=torch.int32)]
[tensor([[11, 12, 13],
[ 5, 6, 7],
[ 7, 8, 9],
[ 3, 4, 5],
[13, 14, 15]], dtype=torch.int32), tensor([14, 8, 10, 6, 16], dtype=torch.int32)]
[tensor([[ 0, 1, 2],
[12, 13, 14]], dtype=torch.int32), tensor([ 3, 15], dtype=torch.int32)]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 6, 7, 8, 9, 10],
[18, 19, 20, 21, 22],
[32, 33, 34, 35, 36],
[ 8, 9, 10, 11, 12],
[ 0, 1, 2, 3, 4],
[ 3, 4, 5, 6, 7],
[39, 40, 41, 42, 43],
[36, 37, 38, 39, 40],
[38, 39, 40, 41, 42],
[10, 11, 12, 13, 14]], dtype=torch.int32)
torch.Size([10])
tensor([11, 23, 37, 13, 5, 8, 44, 41, 43, 15], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)  # output_size equals vocab_size here; use the parameter for generality
# self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
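        # the embedding layer expects LongTensor indices (the DataLoader may yield int32 tensors)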
nn_input = nn_input.long()
embeddings = self.embedding(nn_input)
x, hidden = self.lstm(embeddings, hidden)
# x = self.dropout(x)
x = x.contiguous().view(-1, self.hidden_dim)
x = self.fc(x)
x = x.view(batch_size, -1, self.output_size)
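        # keep only the word scores produced after the last time step of each sequence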
x = x[:,-1]
# x = x[-batch_size:]
# return one batch of output word scores and the hidden state
return x, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
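        # grab one of the model's parameter tensors so the new zero tensors are created with the same data type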
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
clip = 5
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
    # detach the hidden state from its history so backpropagation stops at the current batch
    hidden = tuple([each.data for each in hidden])
    # perform forward pass, backpropagation and optimization
optimizer.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target.long())
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.data.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 5 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.526069526672363
Epoch: 1/10 Loss: 4.870787148475647
Epoch: 1/10 Loss: 4.663077795505524
Epoch: 1/10 Loss: 4.539566141605377
Epoch: 1/10 Loss: 4.442363487720489
Epoch: 1/10 Loss: 4.401235641479492
Epoch: 1/10 Loss: 4.363713886260986
Epoch: 1/10 Loss: 4.297265470981598
Epoch: 1/10 Loss: 4.293090803146362
Epoch: 1/10 Loss: 4.22792042350769
Epoch: 1/10 Loss: 4.240608395576477
Epoch: 1/10 Loss: 4.2034467282295225
Epoch: 1/10 Loss: 4.1930925283432
Epoch: 2/10 Loss: 4.088544507021751
Epoch: 2/10 Loss: 4.011241126537323
Epoch: 2/10 Loss: 3.9799775519371035
Epoch: 2/10 Loss: 3.9981851835250852
Epoch: 2/10 Loss: 3.9762418441772462
Epoch: 2/10 Loss: 3.9649249267578126
Epoch: 2/10 Loss: 3.956095726490021
Epoch: 2/10 Loss: 3.959707641124725
Epoch: 2/10 Loss: 3.9507065873146057
Epoch: 2/10 Loss: 3.933871732711792
Epoch: 2/10 Loss: 3.9364768233299254
Epoch: 2/10 Loss: 3.96703307390213
Epoch: 2/10 Loss: 3.962228919029236
Epoch: 3/10 Loss: 3.8464172612279808
Epoch: 3/10 Loss: 3.8057811441421507
Epoch: 3/10 Loss: 3.76039284658432
Epoch: 3/10 Loss: 3.811958327293396
Epoch: 3/10 Loss: 3.771002641201019
Epoch: 3/10 Loss: 3.8318602933883668
Epoch: 3/10 Loss: 3.79159245967865
Epoch: 3/10 Loss: 3.788624719619751
Epoch: 3/10 Loss: 3.8020915007591247
Epoch: 3/10 Loss: 3.785884033203125
Epoch: 3/10 Loss: 3.81063366651535
Epoch: 3/10 Loss: 3.81618408870697
Epoch: 3/10 Loss: 3.8047997217178344
Epoch: 4/10 Loss: 3.7339677210316693
Epoch: 4/10 Loss: 3.658859066963196
Epoch: 4/10 Loss: 3.66399866437912
Epoch: 4/10 Loss: 3.6472977447509765
Epoch: 4/10 Loss: 3.6854676232337953
Epoch: 4/10 Loss: 3.6719572806358336
Epoch: 4/10 Loss: 3.6907340931892394
Epoch: 4/10 Loss: 3.693911548137665
Epoch: 4/10 Loss: 3.709907410144806
Epoch: 4/10 Loss: 3.702448383808136
Epoch: 4/10 Loss: 3.6935377502441407
Epoch: 4/10 Loss: 3.714180136680603
Epoch: 4/10 Loss: 3.697839234352112
Epoch: 5/10 Loss: 3.6237794978945863
Epoch: 5/10 Loss: 3.5714504952430723
Epoch: 5/10 Loss: 3.5675440969467163
Epoch: 5/10 Loss: 3.5786201162338256
Epoch: 5/10 Loss: 3.5806940422058107
Epoch: 5/10 Loss: 3.5965871195793153
Epoch: 5/10 Loss: 3.594712172031403
Epoch: 5/10 Loss: 3.6289190282821657
Epoch: 5/10 Loss: 3.6097397589683533
Epoch: 5/10 Loss: 3.639933773994446
Epoch: 5/10 Loss: 3.621704699039459
Epoch: 5/10 Loss: 3.6182908020019533
Epoch: 5/10 Loss: 3.656798906803131
Epoch: 6/10 Loss: 3.5579359482193387
Epoch: 6/10 Loss: 3.499333296298981
Epoch: 6/10 Loss: 3.5079272565841673
Epoch: 6/10 Loss: 3.510929590702057
Epoch: 6/10 Loss: 3.53110399389267
Epoch: 6/10 Loss: 3.532146679878235
Epoch: 6/10 Loss: 3.5323957901000975
Epoch: 6/10 Loss: 3.5531150345802307
Epoch: 6/10 Loss: 3.560317481517792
Epoch: 6/10 Loss: 3.5697117590904237
Epoch: 6/10 Loss: 3.553211599826813
Epoch: 6/10 Loss: 3.5718819780349733
Epoch: 6/10 Loss: 3.5827929368019102
Epoch: 7/10 Loss: 3.517017216628304
Epoch: 7/10 Loss: 3.4506275358200074
Epoch: 7/10 Loss: 3.4464491829872133
Epoch: 7/10 Loss: 3.446525366783142
Epoch: 7/10 Loss: 3.477466076374054
Epoch: 7/10 Loss: 3.469796881198883
Epoch: 7/10 Loss: 3.4630816464424132
Epoch: 7/10 Loss: 3.4917164874076843
Epoch: 7/10 Loss: 3.4870456981658937
Epoch: 7/10 Loss: 3.5029824299812318
Epoch: 7/10 Loss: 3.530848997592926
Epoch: 7/10 Loss: 3.5248370847702026
Epoch: 7/10 Loss: 3.545889711380005
Epoch: 8/10 Loss: 3.4709635061376236
Epoch: 8/10 Loss: 3.3895390477180483
Epoch: 8/10 Loss: 3.3997337040901185
Epoch: 8/10 Loss: 3.4019933161735536
Epoch: 8/10 Loss: 3.415599135875702
Epoch: 8/10 Loss: 3.418054500102997
Epoch: 8/10 Loss: 3.4414546360969545
Epoch: 8/10 Loss: 3.4558645195961
Epoch: 8/10 Loss: 3.4494388461112977
Epoch: 8/10 Loss: 3.4709333066940307
Epoch: 8/10 Loss: 3.4754896659851076
Epoch: 8/10 Loss: 3.4798547253608705
Epoch: 8/10 Loss: 3.5005654253959655
Epoch: 9/10 Loss: 3.414729262038035
Epoch: 9/10 Loss: 3.363495777130127
Epoch: 9/10 Loss: 3.357198349952698
Epoch: 9/10 Loss: 3.3757694115638732
Epoch: 9/10 Loss: 3.392073076248169
Epoch: 9/10 Loss: 3.3958112869262695
Epoch: 9/10 Loss: 3.4049220190048217
Epoch: 9/10 Loss: 3.415756942272186
Epoch: 9/10 Loss: 3.4258674659729005
Epoch: 9/10 Loss: 3.424122892856598
Epoch: 9/10 Loss: 3.4313274941444396
Epoch: 9/10 Loss: 3.444932798862457
Epoch: 9/10 Loss: 3.457956283569336
Epoch: 10/10 Loss: 3.390017016138209
Epoch: 10/10 Loss: 3.318030920982361
Epoch: 10/10 Loss: 3.3126797518730164
Epoch: 10/10 Loss: 3.3411253027915953
Epoch: 10/10 Loss: 3.3589114565849303
Epoch: 10/10 Loss: 3.3840724930763244
Epoch: 10/10 Loss: 3.3634719338417054
Epoch: 10/10 Loss: 3.372021330833435
Epoch: 10/10 Loss: 3.3974899363517763
Epoch: 10/10 Loss: 3.408272335529327
Epoch: 10/10 Loss: 3.3945613594055177
Epoch: 10/10 Loss: 3.4293823318481444
Epoch: 10/10 Loss: 3.426954882144928
###Markdown
 Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I ran tests with many combinations of parameters, but my network wasn't learning: the loss dropped a little in the first couple of batches and then levelled off. I finally found an error in my forward function - I wasn't properly taking the output for the last batch. After I fixed it, I started the learning process again, saw that the network was learning, and then stopped training and changed all the parameters back to the initial ones, which I had chosen based on intuition gathered in this nanodegree. After one full training run with those parameters I had a network with a loss smaller than 3.5, so I left it like that. It seems that choosing reasonable parameters for a network is quite often not very hard - you just follow some rules of thumb, like an embedding dim between 200 and 300, a hidden dim chosen from 128, 256 or 512, convolutional layer sizes changing by a factor of two, and so on. Then sometimes you just need to tweak those parameters a little and you're done. Out of curiosity I will experiment with different parameters in my free time, but for now I'll leave the network as it is. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: sammy.
jerry: i mean, i don't know. but, i was wondering if you don't like the drake?
george: i don't even know where i was in the middle of the building)
jerry: i don't know. i think we should go upstairs.
george: oh, yeah, yeah.
newman:(still trying to keep the kibosh out.
kramer: oh!
elaine: what?
george:(to jerry) what are you doing here?
jerry: i was just joking.
jerry: well, you don't like to take it off.
newman: well you don't like the drake.
jerry: you know, if i could see you. i can't believe you, you should be the first one that i was doing.
elaine: i don't know what this is....
newman: i know...
jerry: so, uh, i think i'll get it...
jerry: oh my god.
elaine:(to george) you know what you think.
jerry:(looking at the door)
kramer:(to the woman) : i can't believe that i am going to get out of your way to get a little bit of the aryan union.
george: i know what i'm gonna do.
george: oh, yeah, well, i'm sorry about this.
elaine: i don't want to talk to her?
elaine: no, no, no, no.
jerry: you don't know what the hell is that?
george: i was in my apartment.
elaine: what do you mean?
elaine: yeah.
jerry: so, what are you doing?
kramer:(to george) what?
kramer: well, it's a little bit.
jerry: well i guess.
elaine: i know.
jerry:(to elaine) oh, i can't get you to go.
jerry: you don't think it
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
Replace data_dir with previously exported Seinfeld_Scripts_cleaned.txt - Using `Seinfeld_Scripts_cleaned.txt` ensures that preprocess_and_save_data will replace the standard preprocess.p with one that has very short and unintelligible entries removed- See [Export Refined Data](dataRefine)
###Code
if os.path.isfile('./data/Seinfeld_Scripts_cleaned.txt'):
data_dir = './data/Seinfeld_Scripts_cleaned.txt'
text = helper.load_data(data_dir)
data_dir
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
device = torch.device("cuda" if train_on_gpu else "cpu")
device
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target``` Adapted from Udacity knowledge base [code](https://knowledge.udacity.com/questions/29798) ~ Survesh C
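As a quick sanity check of the contract described above, here is a minimal sketch (plain Python, no `DataLoader`) that enumerates the expected (feature, target) pairs for the toy example before the batched implementation below:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

# every position that still has a full sequence plus one following target word
pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]

for feature, target in pairs:
    print(feature, '->', target)
# [1, 2, 3, 4] -> 5
# [2, 3, 4, 5] -> 6
# [3, 4, 5, 6] -> 7
```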
###Code
import torch
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size, padbatch=False):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# drop the last sequence_length words so that every feature sequence still has a following target word
dwords = words[:-sequence_length]
arr_len = len(dwords)
# if the adjusted length is greater than zero then process further, otherwise deal with it later
if arr_len > 0:
feature_tensors = []
target_tensors = []
# iterate through the array, one sequence at a time
for n in range(0, len(dwords)):
feature_tensors.append(words[n:n+sequence_length])
target_tensors.append(words[n+sequence_length])
# convert the arrays to numpy arrays before further processing
feature_tensors = np.array(feature_tensors)
target_tensors = np.array(target_tensors)
# get the number of whole batches
num_batches = len(dwords)//batch_size
# check total array size, for comparison with whole batch size
arr_len = len(feature_tensors)
# arrange for arrays to be padded with zeros to return whole batches...
if (arr_len <= 0) or (arr_len < batch_size and padbatch==False):
# nothing at all to process, just return all zeros - ignore padbatch setting
feature_tensors = np.zeros((batch_size, sequence_length))
target_tensors = np.zeros(batch_size)
elif arr_len < batch_size and padbatch==True:
# the incoming data exists, but is not enough for one batch, pad the batch with zero
feature_tensors = np.pad(feature_tensors, [(0, batch_size - arr_len), (0, 0)], mode='constant')
target_tensors = np.pad(target_tensors, [(0, batch_size - arr_len)], mode='constant')
elif num_batches*batch_size < arr_len and padbatch==True:
# when possible whole batches are removed we have a few record remaining, pad the last batch with zero
feature_tensors = np.pad(feature_tensors, [(0, arr_len - num_batches*batch_size), (0, 0)], mode='constant')
target_tensors = np.pad(target_tensors, [(0, arr_len - num_batches*batch_size)], mode='constant')
else:
# everything is balanced, not strictly required as we have all complete batches already
feature_tensors = feature_tensors[:num_batches*batch_size]
target_tensors = target_tensors[:num_batches*batch_size]
# convert to torch tensors
torch_features = torch.from_numpy(feature_tensors).long()
torch_targets = torch.from_numpy(target_tensors).long()
# create a TensorDataset and then a DataLoader
data = TensorDataset(torch_features, torch_targets)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`. Test with no padbatch parameter, default is False
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[14, 15, 16, 17, 18],
[16, 17, 18, 19, 20],
[ 7, 8, 9, 10, 11],
[ 4, 5, 6, 7, 8],
[17, 18, 19, 20, 21],
[ 2, 3, 4, 5, 6],
[ 1, 2, 3, 4, 5],
[23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[26, 27, 28, 29, 30]])
torch.Size([10])
tensor([19, 21, 12, 9, 22, 7, 6, 28, 11, 31])
###Markdown
With padbatch False, the maximum combination is `[39, 40, 41, 42, 43],[44]`
###Code
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[32, 33, 34, 35, 36],
[15, 16, 17, 18, 19],
[ 0, 1, 2, 3, 4],
[ 6, 7, 8, 9, 10],
[34, 35, 36, 37, 38],
[20, 21, 22, 23, 24],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13],
[39, 40, 41, 42, 43],
[33, 34, 35, 36, 37]])
torch.Size([10])
tensor([37, 20, 5, 11, 39, 25, 13, 14, 44, 38])
###Markdown
Test dataloader with padbatch=True- Maximum combination is `[44, 45, 46, 47, 48],[49]` which is the entire range
###Code
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10, padbatch=True)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[42, 43, 44, 45, 46],
[22, 23, 24, 25, 26],
[29, 30, 31, 32, 33],
[20, 21, 22, 23, 24],
[16, 17, 18, 19, 20],
[ 8, 9, 10, 11, 12],
[28, 29, 30, 31, 32],
[14, 15, 16, 17, 18],
[31, 32, 33, 34, 35],
[44, 45, 46, 47, 48]])
torch.Size([10])
tensor([47, 27, 34, 25, 21, 13, 33, 19, 36, 49])
###Markdown
Dataloader with padbatch=True will also load minimal examples...
###Code
test_text = [15,16,17,18,19,20]
t_loader = batch_data(test_text, sequence_length=5, batch_size=10, padbatch=True)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 0, 0, 0, 0],
[15, 16, 17, 18, 19],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0]])
torch.Size([10])
tensor([ 0, 20, 0, 0, 0, 0, 0, 0, 0, 0])
###Markdown
--- Load word2vec vectors into a weights array if applicable- **That is, if using word2vec but weights array has not been pre-loaded above...**
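The `glove` lookup used in the next cell is assumed to have been built earlier in the notebook; as a hedged sketch, one way such a lookup could be created from the plain-text `glove.6B.300d.txt` file (the file name and location are assumptions) is:
```
import numpy as np

def load_glove(path='glove.6B.300d.txt'):
    """Read a plain-text GloVe file into a {word: 300-d vector} dict."""
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# glove = load_glove()   # then glove['jerry'] is a 300-d numpy vector (if the word is present)
```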
###Code
import pickle
import numpy as np
if use_word2vec and len(weights_matrix) == 0:
weights_matrix = []
words_found = 0
emb_dim = 300
matrix_len = len(vocab_to_int)
weights_matrix = np.zeros((matrix_len, emb_dim))
for i, word in enumerate(vocab_to_int):
try:
weights_matrix[i] = glove[word]
words_found += 1
except KeyError:
weights_matrix[i] = np.random.normal(scale=0.6, size=(emb_dim, ))
pickle.dump(weights_matrix, open(f'weights_matrix.pkl', 'wb'))
if len(weights_matrix) > 0:
weights_matrix = torch.FloatTensor(weights_matrix)
###Output
_____no_output_____
###Markdown
If `weights_matrix` has the same length as the vocabulary and an embedding dimension of 300, it can be used...
###Code
use_word2vec, len(vocab_to_int), len(weights_matrix), len(weights_matrix[0])
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
%%time
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5,
droplayer_dropout=0.3,
use_word2vec=True,
rnnType='LSTM'
# , weightDrop=0, tieWeights=False ## placeholder - might experiment with weight dropout, weight tying
):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
self.use_word2vec = use_word2vec
self.rnnType = rnnType
# Set the dropout layer to a separate value from the dropout passed to the LSTM/GRU
self.droplayer_dropout = droplayer_dropout # self.dropout # 0.3
# define model layers
# Embedding layer - use pre-trained word2vec if use_word2vec is True
# and we've been asked for a 300 dimension embedding
# and the weights matrix fits our vocabulary size
if use_word2vec and self.embedding_dim==300 and len(weights_matrix) == len(vocab_to_int):
self.word_embeddings = nn.Embedding.from_pretrained(weights_matrix)
else:
self.word_embeddings = nn.Embedding(self.vocab_size, self.embedding_dim)
# use an LSTM or GRU
if self.rnnType=='LSTM':
self.rnn = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.dropout, batch_first=True)
else:
self.rnn = nn.GRU(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.dropout, batch_first=True)
# Dropout
# self.dropout_layer = nn.Dropout(self.droplayer_dropout)
# Output Linear layer
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# get embeddings from input
embeddings = self.word_embeddings(nn_input.long())
# Get the outputs and the new hidden state from the lstm/gru
output, hidden = self.rnn(embeddings, hidden)
# Stack up outputs using view
output = output.contiguous().view(-1, self.hidden_dim)
# pass through dropout
# removing this step as it is preventing model from converging...
# output = self.dropout_layer(output)
# push through the fully-connected layer
output = self.fc(output)
# reshape to batch size (first dimension of nn_input), sequence length, output size
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if self.rnnType=='LSTM':
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
else:
if (train_on_gpu):
hidden = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda()
else:
hidden = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
Wall time: 3.57 s
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
# https://github.com/pytorch/examples/blob/master/word_language_model/main.py
# https://discuss.pytorch.org/t/solved-why-we-need-to-detach-variable-which-contains-hidden-representation/1426
# "If we did not truncate the history of hidden states (c, h), the back-propagated gradients would flow from the loss
# towards the beginning, which may result in gradient vanishing or exploding."
# https://github.com/pytorch/pytorch/issues/2198
def repackage_hidden(h):
"""Wraps hidden states in new Tensors, to detach them from their history."""
if isinstance(h, torch.Tensor):
return h.detach()
else:
return tuple(repackage_hidden(v) for v in h)
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Truncated BPTT
# [h.detach_() for h in hidden]
hidden = repackage_hidden(hidden)
# zeroize gradients
rnn.zero_grad()
# retrieve output of forward pass
output, hidden = rnn(inp, hidden)
# calculate the loss
loss = criterion(output.squeeze(), target)
# back propogation
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 10)
# optimizer step
optimizer.step()
# return the loss value and hidden state
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section. A custom version of this function (below) is used to keep the losses and to save intermediate states
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
train_losses = []
best_loss = 5
"""
Modified version of train_rnn that allows collection of train losses and saving of intermediate results
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
# C.Palmer - Added next 2 lines
global best_loss
curr_epoch = 1
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# C.Palmer - Added code block
# Save per epoch
if epoch_i > curr_epoch:
save_model_name = 'rnn_epoch_' + str(curr_epoch)
helper.save_model(f'./save/{save_model_name}', rnn)
curr_epoch = epoch_i
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats, saving best average loss
if batch_i % show_every_n_batches == 0:
avg_loss = np.average(batch_losses)
# C.Palmer - Save average batch losses for graphing
train_losses.append(avg_loss)
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, avg_loss))
batch_losses = []
# C. Palmer - Save model if average loss has improved
if avg_loss < best_loss:
save_model_name = 'rnn_best_loss_' + str(epoch_i)
print('Average training loss decreased ({:.6f} --> {:.6f}). Saving model as {}...'.format(
best_loss, avg_loss, save_model_name))
helper.save_model(f'./save/{save_model_name}', rnn)
best_loss = avg_loss
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int) # already has SPECIAL_WORDS
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 300
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
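The training cell below wraps the call in `active_session()`, which is assumed to come from Udacity's `workspace_utils.py` helper (it keeps the workspace connection alive during long-running cells); a minimal sketch of how it would be imported and used:
```
# assumed import from Udacity's workspace_utils.py helper
from workspace_utils import active_session

with active_session():
    pass  # a long-running call such as train_rnn(...) would go here
```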
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
# C.Palmer - modified parameters loading to optimizer to prevent error
# "optimizing optimizing a parameter that doesn't require gradients"
parameters = filter(lambda p: p.requires_grad, rnn.parameters())
optimizer = torch.optim.Adam(parameters, lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# C.Palmer modified cell as needed to run active_session...
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12.5, 8]
plt.plot(train_losses, label="Training loss")
plt.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
We achieved a loss of 3.440298 during the 10th epoch- **Saved as rnn_best_loss_10** Adjust learning rate and train for a further 5 epochs
###Code
trained_rnn = helper.load_model('./save/trained_rnn')
num_epochs = 5
learning_rate = 0.00005
parameters = filter(lambda p: p.requires_grad, trained_rnn.parameters())
optimizer = torch.optim.Adam(parameters, lr=learning_rate)
criterion = nn.CrossEntropyLoss()
with active_session():
trained_rnn_1 = train_rnn(trained_rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
plt.rcParams['figure.figsize'] = [12.5, 8]
plt.plot(train_losses[5:], label="Training loss")
plt.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
The best loss, at 3.357648, is saved as rnn_best_loss_5, which is actually from epoch 15...
###Code
os.rename('rnn_best_loss_5.pt', 'rnn_best_loss_15.pt')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? Answer:Various parameters were extensively tested, using my PC which has a GTX 1080 card. 1. **Sequence length:** I tested up to 20, at which point the model didn't train at all well, and at 15, 10, 8, and 5. The longer sequence lengths sometimes delivered interestingly longer text passages in the script output, but lower settings produced reasonably consistent, somewhat sensible phrases that looked like a conversation. Smaller sequence lengths were favourable to model convergence but needed to be balanced with the usefulness of the output. A sequence length of 8 was optimal, which was evaluated after testing with 10 for some time and gave me just as good output but converged more reliably.2. **Batch size:** This was a limiting factor: if I tried a larger batch size of around 200 then my PC often just reset without warning. After some testing it was found that a size of 100 gave me good enough results without hardware crashes.3. **Number of epochs:** I needed around 10 epochs to get down to a 3.5 loss. However, due to the sawtooth pattern in the loss values, where the best loss occurred earlier in an epoch, I needed to train a further 5 epochs with a lower learning rate to get a consistent loss under 3.5.4. **Learning rate:** A setting of 0.0005 was best in the first phase of training, but after 10 epochs a lower rate of 0.00005 pushed the model below a 3.5 loss more efficiently; if I left the rate at 0.0005 then it would not reach the better loss.5. **Vocabulary size:** This was set to the length of the vocabulary by measuring the vocab_to_int array. Note that this already includes the additional SPECIAL_WORDS.6. **Output size:** I set this to the same as the vocabulary size, but this was after seeking advice and I am not sure that this is optimal. Do we really need to have the same output length as our input or would we benefit from trimming it?7. **Embedding Dimension:** After experimenting with 400 and 300, I switched to using pre-trained GloVe.6B.300d embeddings to initialise my model, so settled on 300 to align with that. I found that using the pre-trained embeddings made a considerable improvement to the output.8. **Hidden Dimension:** I experimented a lot with this, but found that at 300 training was manageable; going higher seemed to result in the same fragility in my environment as increasing the batch size did, as well as being slower and more difficult to converge.9. **Number of layers:** I chose 3 as that was the recommended number in the literature I consulted around this kind of task.10. **Show stats every n batches:** I picked 500 as that was a good interval for testing, reporting and saving average losses. Further points:- The [GloVe](https://nlp.stanford.edu/pubs/glove.pdf) glove.6B.300d word-embeddings were useful: the model began to converge more quickly and the output seemed more coherent.- I needed to alter some of the "DON'T MODIFY" cells to work with this, for instance to use active_session, to save intermediate results and an array of training losses, and to test the use of some of my model parameters. 
- My model can be configured as a GRU, which is suggested as an option by the notebook, but tests.test_rnn would not pass a model configured as a GRU - it complained about the hidden state size being incorrect - it expects init_hidden to return two components (hidden and cell state) but there can only be one component for a GRU as it cannot utilize cell state. e.g. "`AssertionError: Wrong hidden state size. Expected type (2, 50, 10). Got type torch.Size([50, 10])`". However, I did evaluate the GRU but found it didn't converge any more efficiently than the LSTM or yield a better output. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Or, load checkpoint with best loss....
###Code
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/rnn_best_loss_15')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
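# np.roll shifts the sequence one position to the left along axis 1, e.g. [a, b, c, d] -> [b, c, d, a];
# the slot that wraps around to the end is then overwritten with the new word id on the next line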
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
import warnings
warnings.simplefilter('ignore') # "error", "ignore", "always", "default", "module", or "once"
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
# save script to a text file
f = open("generated_script_jerry.txt","w")
f.write(generated_script)
f.close()
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
# save script to a text file
f = open("generated_script_elaine.txt","w")
f.write(generated_script)
f.close()
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
# save script to a text file
f = open("generated_script_kramer.txt","w")
f.write(generated_script)
f.close()
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'george' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
# save script to a text file
f = open("generated_script_george.txt","w")
f.write(generated_script)
f.close()
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'ronnie' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:76: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
My favourite line: "(george enters and takes a bite of the sink)" Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission. --- Extra Procedures Explore data in more depth - Check data ranges, and eliminate unuseful words - requires dict/wordsEn.txt and dict/words.txt- looking for lines where the text is very brief or not recognizably English
###Code
english_words = {}
with open("dict/wordsEn.txt") as word_file:
english_words = set(word.strip().lower() for word in word_file if len(word) > 0)
with open("dict/words.txt") as word_file:
english_words = set(word.strip().lower() for word in word_file if len(word) > 0)
english_words.add("o.k.")
def is_english_word(word):
return 1 if word.lower() in english_words else 0
import pandas as pd
import re as re
import operator
from collections import Counter
def uniqueratio(wcount, uwcount):
if uwcount == 0:
retval = 1.
else:
retval = wcount / uwcount
return retval
def textDF (text):
# Data and target as a data frame, with word count attributes
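# Derived columns: Character (speaker name before ':'), Stripped (letters and whitespace only),
# DataLength / WordCount / UniqueWords (size measures for the stripped text),
# UniqueRatio (words per unique word), EnglishPerc (rough English-ness score),
# WordsCounts (per-line word frequency Counter)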
DF = pd.DataFrame(text, columns = ["Text"])
DF['Character'] = DF.Text.apply(lambda x: x[: x.find(':')])
DF['Stripped'] = DF.Text.apply(lambda x: re.sub (r'([^a-zA-Z\s]+?)', '', x))
DF['DataLength'] = DF.Stripped.apply(len)
DF['WordCount'] = DF.Stripped.apply(lambda x: re.split("[ \[\]\\n\@\-\"!?:,.()<>]+", x)).apply(len)
DF['EnglishPerc'] = DF.Stripped.apply(lambda x: [is_english_word(w) for w in x]).apply(sum) / DF.WordCount
DF['WordsCounts'] = DF.Stripped.apply(lambda x: Counter(filter(None, re.split("[ \[\]\\n\@\-\"!?:,.()<>]+", x))))
DF['UniqueWords'] = DF['WordsCounts'].apply(len)
DF['UniqueRatio'] = DF.apply(lambda x: uniqueratio(x['WordCount'], x['UniqueWords']), axis =1)
## DF['NoPunctuation'] = DF.Text.apply(lambda x: ''.join(e for e in x if e.isalnum()))
DF['DataIndex'] = DF.index
DF = DF[['Text', 'Character', 'DataLength', 'WordCount', 'UniqueWords',
'Stripped', 'UniqueRatio', 'EnglishPerc', 'DataIndex', 'WordsCounts' ]]
return DF
textSplit = text.split("\n\n")
Text_df = textDF(textSplit)
pd.set_option('display.max_colwidth', 80)
Text_df.head(5)
###Output
_____no_output_____
###Markdown
Get unique names of Seinfeld characters and feed into english words dict, then re-run dataframe...- Wanting to identify rows with not much recognized English, but need to ensure the majority of the characters in the show are counted as English words.
###Code
charactersSet = set(list(Text_df.Character))
characters_df = pd.DataFrame(list(charactersSet), columns = ["Character"])
characters_df[characters_df.Character.str.len() < 30][:10]
###Output
_____no_output_____
###Markdown
Just use a name if it is non-empty and at most 20 characters long
###Code
for character in characters_df.values:
charac = character[0]
bracketpos = charac.find('(')
if bracketpos < 0:
bracketpos = charac.find('[')
if bracketpos >= 0:
charac = charac[:bracketpos]
if (len(charac) > 0) & (len(charac) <= 20):
print(charac)
english_words.add(charac)
english_words.add(charac+":")
###Output
_____no_output_____
###Markdown
**Sample output:**```greg kramer & georgeboyfriendworkers jimsashakramer man in showerjerry & tiaronnie jerry kramer helen & mortyfred spikepatold man 3winona tough guy mailmanelaine jerry's penismr tanakaopening monologbuilding ckramer juliocherylwendytall girlelaine ```
###Code
is_english_word("subway announcement")
###Output
_____no_output_____
###Markdown
Re-create dataframe after adding character names to the english words dictionary...
###Code
Text_df = textDF(textSplit)
pd.set_option('display.max_colwidth', -1)
Text_df.loc[:20, :"Stripped"]
###Output
_____no_output_____
###Markdown
Sentences with a low percentage of plain English words or low word count may be discarded...
###Code
Text_df[(Text_df.EnglishPerc < 1.66) | (Text_df.WordCount < 3) | (Text_df.UniqueWords < 2)].loc[:, :"EnglishPerc"]
###Output
_____no_output_____
###Markdown
Preserve the row indexes as a series so they can be identified and removed later...
###Code
badRows = pd.Series(Text_df[(Text_df.EnglishPerc < 1.66) | (Text_df.WordCount < 3) | (Text_df.UniqueWords < 2)].index)
badRows[:5]
Text_df[Text_df.index.isin(badRows)].loc[485:491, :"EnglishPerc"]
Text_df[~Text_df.index.isin(badRows)].loc[485:491, :"EnglishPerc"]
###Output
_____no_output_____
###Markdown
Passages with a high word count are more than OK, and can be appreciated!
###Code
pd.set_option('display.max_colwidth', -1)
Text_df[Text_df.WordCount > 250].loc[:, :"UniqueWords"]
pd.set_option('display.max_colwidth', 80)
DataLengths = Text_df.DataLength
DataLengths.mean(), DataLengths.std(), DataLengths.min(), DataLengths.max()
DataLengths.hist()
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.hist(DataLengths[DataLengths <= 500]);
ax1.set_xlabel('Document length under 500')
ax1.set_ylabel('No of Documents')
ax2.hist(DataLengths[DataLengths > 500]);
ax2.set_xlabel('Document length over 500')
ax2.set_ylabel('No of Documents');
WordCounts = Text_df.WordCount
WordCounts.mean(), WordCounts.std(), WordCounts.min(), WordCounts.max()
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.hist(WordCounts[WordCounts <= 50]);
ax1.set_xlabel('Word counts under 50')
ax1.set_ylabel('No of Documents')
ax2.hist(WordCounts[WordCounts > 50]);
ax2.set_xlabel('Word counts over 50')
ax2.set_ylabel('No of Documents');
pd.set_option('display.max_colwidth', 80)
WordCountsClean = Text_df[~Text_df.index.isin(badRows)].WordCount
WordCountsClean.mean(), WordCountsClean.std(), WordCountsClean.min(), WordCountsClean.max()
###Output
_____no_output_____
###Markdown
Export as an alternative refined text file without the very short entries
###Code
import os
output_file = os.path.join('./data/Seinfeld_Scripts_cleaned.txt')
with open(output_file, 'w') as f:
for txt in Text_df[~Text_df.index.isin(badRows)]["Text"].tolist():
f.write(txt + '\n\n')
###Output
_____no_output_____
###Markdown
[Continue to word2vec section...](word2vec) [Continue to pre-process and save section...](preprocess_save) Renumber notebook cells
###Code
%%javascript
// Sourced from http://nbviewer.jupyter.org/gist/minrk/5d0946d39d511d9e0b5a
$("#renumber-button").parent().remove();
function renumber() {
// renumber cells in order
var i=1;
IPython.notebook.get_cells().map(function (cell) {
if (cell.cell_type == 'code') {
// set the input prompt
cell.set_input_prompt(i);
// set the output prompt (in two places)
cell.output_area.outputs.map(function (output) {
if (output.output_type == 'execute_result') {
output.execution_count = i;
cell.element.find(".output_prompt").text('Out[' + i + ']:');
}
});
i += 1;
}
});
}
IPython.toolbar.add_buttons_group([{
'label' : 'Renumber',
'icon' : 'fa-list-ol',
'callback': renumber,
'id' : 'renumber-button'
}]);
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
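A tiny illustration of what the two lookup dictionaries provide once built (a toy vocabulary; the real tables are created by the function in the next cell):
```
vocab_to_int = {'jerry': 0, 'hello': 1, 'newman': 2}
int_to_vocab = {i: w for w, i in vocab_to_int.items()}

encoded = [vocab_to_int[w] for w in ['hello', 'newman']]
decoded = [int_to_vocab[i] for i in encoded]
print(encoded, decoded)   # [1, 2] ['hello', 'newman']
```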
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
#define count var
countVar = Counter(text)
#define vocab var
Vocab = sorted(countVar, key=countVar.get, reverse=True)
#define integer to vocab
int_to_vocab = {ii: word for ii, word in enumerate(Vocab)}
#define vocab to integer
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
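A quick sketch of the effect this tokenization is meant to have (using two of the token names chosen below; in the project the actual replacement is performed by the preprocessing helper, so this is only an illustration):
```
text = 'hello, bye!'
tokens = {',': '||comma||', '!': '||exclam_mark||'}
for symbol, token in tokens.items():
    text = text.replace(symbol, ' ' + token + ' ')
print(text.split())
# ['hello', '||comma||', 'bye', '||exclam_mark||']
```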
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = dict()
tokens['.'] = '||period||'
tokens[','] = '||comma||'
tokens['"'] = '||quotation_mark||'
tokens[';'] = '||semicolon||'
tokens['!'] = '||exclam_mark||'
tokens['?'] = '||question_mark||'
tokens['('] = '||left_par||'
tokens[')'] = '||right_par||'
tokens['-'] = '||dash||'
tokens['\n'] = '||return||'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
Num_batches = len(words)//batch_size
words = words[:Num_batches*batch_size]
x, y = [], []
for idx in range(0, len(words) - sequence_length):
x.append(words[idx:idx+sequence_length])
y.append(words[idx+sequence_length])
feature_tensors, target_tensors = torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y))
dataset = TensorDataset(feature_tensors, target_tensors)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
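###Markdown
The expected output shown earlier mentions shuffled batches, while the loader built above keeps the original order, which is why the rows come out sequentially. Shuffling is optional for this project; if wanted, the same dataset can simply be re-wrapped with `shuffle=True`, as in this small sketch.
###Code
# optional: same dataset, but with batch order shuffled every epoch
shuffled_loader = torch.utils.data.DataLoader(t_loader.dataset, batch_size=10, shuffle=True)
shuffled_x, shuffled_y = next(iter(shuffled_loader))
print(shuffled_x)   # rows should now appear in a random order
###Output
_____no_output_____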
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
# reshape
out = out.view(batch_size, -1, self.output_size)
# find the last batch
output = out[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
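###Markdown
As an optional check beyond the unit test, a forward pass on a small random batch makes the expected tensor shapes explicit. All sizes below are arbitrary and chosen only for illustration; the sketch assumes the `RNN` class defined above.
###Code
# illustrative shape check with made-up sizes
_vocab, _embed, _hidden, _layers = 20, 8, 16, 2
_batch, _seq = 4, 5
shape_check_rnn = RNN(_vocab, _vocab, _embed, _hidden, _layers, dropout=0.5)
dummy_input = torch.randint(0, _vocab, (_batch, _seq))
if train_on_gpu:
    shape_check_rnn.cuda()
    dummy_input = dummy_input.cuda()
h0 = shape_check_rnn.init_hidden(_batch)
scores, hn = shape_check_rnn(dummy_input, h0)
print(scores.shape)   # torch.Size([4, 20]): one row of word scores per input sequence
print(hn[0].shape)    # torch.Size([2, 4, 16]): (n_layers, batch_size, hidden_dim)
###Output
_____no_output_____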
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if (train_on_gpu):
inp = inp.cuda()
target = target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
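###Markdown
The unit test above already exercises this function; the sketch below just makes the return values explicit by running one step on random data. Every size here is arbitrary and chosen for illustration only.
###Code
# one illustrative optimisation step on random data
_v, _b, _s = 30, 8, 6
step_rnn = RNN(_v, _v, 10, 16, 2, dropout=0.5)
if train_on_gpu:
    step_rnn.cuda()
step_optimizer = torch.optim.Adam(step_rnn.parameters(), lr=0.001)
step_criterion = nn.CrossEntropyLoss()
step_inp = torch.randint(0, _v, (_b, _s))
step_target = torch.randint(0, _v, (_b,))
step_hidden = step_rnn.init_hidden(_b)
step_loss, step_hidden = forward_back_prop(step_rnn, step_optimizer, step_criterion, step_inp, step_target, step_hidden)
print(step_loss)            # a plain Python float, roughly ln(30) ~ 3.4 before any learning
print(step_hidden[0].shape) # the hidden state that would be carried to the next batch
###Output
_____no_output_____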
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 12 # of words in a sequence
# Batch Size
batch_size = 120
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = int(300*1.25)
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
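###Markdown
Before training it can help to confirm how many full batches one epoch contains with these settings, so that `show_every_n_batches` gives a sensible number of progress lines. A small check using the loader built above:
###Code
# how many full batches the training loop will see per epoch, and how often it will print
batches_per_epoch = len(train_loader.dataset) // batch_size
print(batches_per_epoch)
print(batches_per_epoch // show_every_n_batches)   # progress lines per epoch
###Output
_____no_output_____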
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.531735227108002
Epoch: 1/10 Loss: 4.930219256401062
Epoch: 1/10 Loss: 4.637746600151062
Epoch: 1/10 Loss: 4.442910384654999
Epoch: 1/10 Loss: 4.507025374412537
Epoch: 1/10 Loss: 4.437873532295227
Epoch: 1/10 Loss: 4.499480187416077
Epoch: 1/10 Loss: 4.360066707611084
Epoch: 1/10 Loss: 4.224316485881806
Epoch: 1/10 Loss: 4.262131739616394
Epoch: 1/10 Loss: 4.250785901069641
Epoch: 1/10 Loss: 4.350858811378479
Epoch: 1/10 Loss: 4.325847270488739
Epoch: 1/10 Loss: 4.353909639358521
Epoch: 2/10 Loss: 4.122715645664375
Epoch: 2/10 Loss: 3.9956280279159544
Epoch: 2/10 Loss: 3.8856767058372497
Epoch: 2/10 Loss: 3.764372691631317
Epoch: 2/10 Loss: 3.86486045217514
Epoch: 2/10 Loss: 3.8547475366592407
Epoch: 2/10 Loss: 3.9458832178115846
Epoch: 2/10 Loss: 3.8158533701896666
Epoch: 2/10 Loss: 3.730561679840088
Epoch: 2/10 Loss: 3.7631667466163634
Epoch: 2/10 Loss: 3.792806112766266
Epoch: 2/10 Loss: 3.901373929977417
Epoch: 2/10 Loss: 3.8545562386512757
Epoch: 2/10 Loss: 3.8934123978614807
Epoch: 3/10 Loss: 3.757648332837055
Epoch: 3/10 Loss: 3.715178225517273
Epoch: 3/10 Loss: 3.630136202812195
Epoch: 3/10 Loss: 3.5242831320762633
Epoch: 3/10 Loss: 3.625628924369812
Epoch: 3/10 Loss: 3.6309379606246948
Epoch: 3/10 Loss: 3.7150929403305053
Epoch: 3/10 Loss: 3.5617436628341674
Epoch: 3/10 Loss: 3.5180994987487795
Epoch: 3/10 Loss: 3.524529664516449
Epoch: 3/10 Loss: 3.5592564339637756
Epoch: 3/10 Loss: 3.682766806125641
Epoch: 3/10 Loss: 3.63344953250885
Epoch: 3/10 Loss: 3.6614061670303344
Epoch: 4/10 Loss: 3.569586784254444
Epoch: 4/10 Loss: 3.5327431926727293
Epoch: 4/10 Loss: 3.4778979787826536
Epoch: 4/10 Loss: 3.3810313334465025
Epoch: 4/10 Loss: 3.4490697503089907
Epoch: 4/10 Loss: 3.4713255314826967
Epoch: 4/10 Loss: 3.540807016849518
Epoch: 4/10 Loss: 3.395219274520874
Epoch: 4/10 Loss: 3.3618284215927123
Epoch: 4/10 Loss: 3.380989155292511
Epoch: 4/10 Loss: 3.3962616963386534
Epoch: 4/10 Loss: 3.5119336886405943
Epoch: 4/10 Loss: 3.5053564672470094
Epoch: 4/10 Loss: 3.52675777053833
Epoch: 5/10 Loss: 3.444213536519074
Epoch: 5/10 Loss: 3.4076444568634034
Epoch: 5/10 Loss: 3.3569597172737122
Epoch: 5/10 Loss: 3.264707137107849
Epoch: 5/10 Loss: 3.325695327758789
Epoch: 5/10 Loss: 3.3434394330978394
Epoch: 5/10 Loss: 3.4178655323982237
Epoch: 5/10 Loss: 3.292145290374756
Epoch: 5/10 Loss: 3.25900110912323
Epoch: 5/10 Loss: 3.282059187412262
Epoch: 5/10 Loss: 3.286310025691986
Epoch: 5/10 Loss: 3.371444211959839
Epoch: 5/10 Loss: 3.3803050670623778
Epoch: 5/10 Loss: 3.4059303545951845
Epoch: 6/10 Loss: 3.3510205145178626
Epoch: 6/10 Loss: 3.3233260822296145
Epoch: 6/10 Loss: 3.262583809375763
Epoch: 6/10 Loss: 3.1777939085960387
Epoch: 6/10 Loss: 3.2358165702819823
Epoch: 6/10 Loss: 3.2441793150901796
Epoch: 6/10 Loss: 3.323215190887451
Epoch: 6/10 Loss: 3.2096805644035338
Epoch: 6/10 Loss: 3.1801719818115233
Epoch: 6/10 Loss: 3.198467743396759
Epoch: 6/10 Loss: 3.1996535511016844
Epoch: 6/10 Loss: 3.2810081453323363
Epoch: 6/10 Loss: 3.292669029712677
Epoch: 6/10 Loss: 3.3275886268615724
Epoch: 7/10 Loss: 3.2740323561436364
Epoch: 7/10 Loss: 3.2473365926742552
Epoch: 7/10 Loss: 3.189321361064911
Epoch: 7/10 Loss: 3.1124250736236574
Epoch: 7/10 Loss: 3.1598252415657044
Epoch: 7/10 Loss: 3.174737638950348
Epoch: 7/10 Loss: 3.2507713837623595
Epoch: 7/10 Loss: 3.133176600456238
Epoch: 7/10 Loss: 3.1098085503578186
Epoch: 7/10 Loss: 3.1263022136688234
Epoch: 7/10 Loss: 3.1329917140007018
Epoch: 7/10 Loss: 3.2054256014823914
Epoch: 7/10 Loss: 3.2255016083717347
Epoch: 7/10 Loss: 3.249888722896576
Epoch: 8/10 Loss: 3.2023202670221034
Epoch: 8/10 Loss: 3.1807839002609253
Epoch: 8/10 Loss: 3.132005618095398
Epoch: 8/10 Loss: 3.0564675722122194
Epoch: 8/10 Loss: 3.101025879383087
Epoch: 8/10 Loss: 3.1125088901519775
Epoch: 8/10 Loss: 3.191727280139923
Epoch: 8/10 Loss: 3.073734776496887
Epoch: 8/10 Loss: 3.0565507707595825
Epoch: 8/10 Loss: 3.068301407337189
Epoch: 8/10 Loss: 3.0812396683692933
Epoch: 8/10 Loss: 3.148022204875946
Epoch: 8/10 Loss: 3.1773056478500368
Epoch: 8/10 Loss: 3.1913066611289977
Epoch: 9/10 Loss: 3.1471167175460604
Epoch: 9/10 Loss: 3.133128029823303
Epoch: 9/10 Loss: 3.085259078979492
Epoch: 9/10 Loss: 3.009995768547058
Epoch: 9/10 Loss: 3.0498082242012026
Epoch: 9/10 Loss: 3.0640956010818483
Epoch: 9/10 Loss: 3.144371497631073
Epoch: 9/10 Loss: 3.022254427909851
Epoch: 9/10 Loss: 3.0071170454025267
Epoch: 9/10 Loss: 3.0216007103919984
Epoch: 9/10 Loss: 3.0384934406280517
Epoch: 9/10 Loss: 3.1074284529685974
Epoch: 9/10 Loss: 3.1239990234375
Epoch: 9/10 Loss: 3.136796194553375
Epoch: 10/10 Loss: 3.0981430233099836
Epoch: 10/10 Loss: 3.0887152523994446
Epoch: 10/10 Loss: 3.0389931325912474
Epoch: 10/10 Loss: 2.9724698853492737
Epoch: 10/10 Loss: 3.003671471595764
Epoch: 10/10 Loss: 3.021963978290558
Epoch: 10/10 Loss: 3.099330397605896
Epoch: 10/10 Loss: 2.9838244442939756
Epoch: 10/10 Loss: 2.9682252025604248
Epoch: 10/10 Loss: 2.9791079874038697
Epoch: 10/10 Loss: 2.999862591743469
Epoch: 10/10 Loss: 3.0582768750190734
Epoch: 10/10 Loss: 3.0778299646377563
Epoch: 10/10 Loss: 3.0915690803527833
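###Markdown
Cross-entropy losses like those above are easier to interpret as perplexity (e raised to the loss): a rough gauge of how many words the model is effectively choosing between at each step. Using the last value printed above:
###Code
# perplexity corresponding to the final reported loss
final_loss = 3.0915690803527833
print(np.exp(final_loss))   # about 22 -- far better than guessing uniformly over the full vocabulary
###Output
_____no_output_____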
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I trained the model with the following parameters: 10 epochs, a learning rate of 0.001, an embedding dimension of 300, a hidden dimension of 375, 2 LSTM layers, and progress reported every 500 batches. With these settings the training loss came down to about 2.96. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:40: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
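###Markdown
The warning above is emitted because the LSTM weights were reloaded from disk and are no longer contiguous in memory; it is harmless, but it can usually be silenced by compacting them once after loading. A hedged sketch, assuming the reloaded model still exposes the `lstm` attribute defined in the `RNN` class above:
###Code
# optional: compact the reloaded LSTM weights so the warning goes away
if hasattr(trained_rnn, 'lstm'):
    trained_rnn.lstm.flatten_parameters()
###Output
_____no_output_____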
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_frequency = Counter(text)
sorted_vocab = sorted(word_frequency, key = word_frequency.get, reverse = True)
int_to_vocab = {i : word for i, word in enumerate(sorted_vocab)}
vocab_to_int = {word : i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
pun_dic = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_Parentheses||',
'-': '||dash||',
'\n': '||return||'
}
return pun_dic
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
batch_num = len(words)//batch_size
batch_words = words[: (batch_num * batch_size)]
feature, target = [],[]
target_len = len(batch_words[:-sequence_length])
for i in range(0, target_len):
feature.append(batch_words[i: i + sequence_length])
target.append(batch_words[i + sequence_length])
target_tensors = torch.from_numpy(np.array(target))
feature_tensors = torch.from_numpy(np.array(feature))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size = batch_size, shuffle = True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 12, 13, 14, 15, 16],
[ 5, 6, 7, 8, 9],
[ 10, 11, 12, 13, 14],
[ 13, 14, 15, 16, 17],
[ 7, 8, 9, 10, 11],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 22, 23, 24, 25, 26],
[ 14, 15, 16, 17, 18]])
torch.Size([10])
tensor([ 33, 17, 10, 15, 18, 12, 6, 7, 27, 19])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.output_size = output_size
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.dropout = nn.Dropout(dropout)
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first = True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embed_out = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embed_out, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
lstm_out = self.dropout(lstm_out)
lstm_out = self.fc(lstm_out)
lstm_out = lstm_out.view(batch_size, -1, self.output_size)
lstm_output = lstm_out[:, -1]
# return one batch of output word scores and the hidden state
return lstm_output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size , self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size , self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size , self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size , self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
if (train_on_gpu):
inp, target = inp.cuda(), target.cuda()
hidden = tuple([i.data for i in hidden])
rnn.zero_grad()
out, hidden = rnn(inp, hidden)
loss = criterion(out, target)
loss.backward()
clip = 5
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after a set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 512
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
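###Markdown
If you want to compare a few settings before committing to a long run, a short probe over candidate sequence lengths, each trained for only a couple of hundred batches, can be informative. This is an optional, rough sketch that reuses the functions defined above; the candidate values and the 200-batch budget are arbitrary choices for illustration.
###Code
# optional: rough comparison of a few sequence lengths on a very short training budget
candidate_seq_lengths = [4, 8, 16]
probe_batches = 200   # arbitrary, just enough to see a trend
for seq_len in candidate_seq_lengths:
    probe_loader = batch_data(int_text, seq_len, batch_size)
    probe_model = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
    if train_on_gpu:
        probe_model.cuda()
    probe_opt = torch.optim.Adam(probe_model.parameters(), lr=learning_rate)
    probe_crit = nn.CrossEntropyLoss()
    probe_hidden = probe_model.init_hidden(batch_size)
    probe_losses = []
    for batch_i, (inputs, labels) in enumerate(probe_loader, 1):
        if batch_i > probe_batches:
            break
        loss, probe_hidden = forward_back_prop(probe_model, probe_opt, probe_crit, inputs, labels, probe_hidden)
        probe_losses.append(loss)
    print('sequence_length={}: mean loss over first {} batches = {:.3f}'.format(
        seq_len, len(probe_losses), np.average(probe_losses)))
###Output
_____no_output_____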
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 5.437440775871277
Epoch: 1/30 Loss: 4.793800668716431
Epoch: 1/30 Loss: 4.5818943510055545
Epoch: 2/30 Loss: 4.41057398734305
Epoch: 2/30 Loss: 4.323312338352204
Epoch: 2/30 Loss: 4.2895473055839535
Epoch: 3/30 Loss: 4.195126593032508
Epoch: 3/30 Loss: 4.1544194674491886
Epoch: 3/30 Loss: 4.133786525249481
Epoch: 4/30 Loss: 4.08080295544726
Epoch: 4/30 Loss: 4.044547868251801
Epoch: 4/30 Loss: 4.041615394592285
Epoch: 5/30 Loss: 3.993426749580785
Epoch: 5/30 Loss: 3.973333731174469
Epoch: 5/30 Loss: 3.968936930179596
Epoch: 6/30 Loss: 3.9191343990253857
Epoch: 6/30 Loss: 3.914088735580444
Epoch: 6/30 Loss: 3.9148674035072326
Epoch: 7/30 Loss: 3.8595362539716094
Epoch: 7/30 Loss: 3.861041332244873
Epoch: 7/30 Loss: 3.8518181381225585
Epoch: 8/30 Loss: 3.8197524389918196
Epoch: 8/30 Loss: 3.8067559876441956
Epoch: 8/30 Loss: 3.8154332323074343
Epoch: 9/30 Loss: 3.7852803235433683
Epoch: 9/30 Loss: 3.775405547618866
Epoch: 9/30 Loss: 3.774232336997986
Epoch: 10/30 Loss: 3.747182048766719
Epoch: 10/30 Loss: 3.737147322654724
Epoch: 10/30 Loss: 3.757960272312164
Epoch: 11/30 Loss: 3.7103773662757615
Epoch: 11/30 Loss: 3.712995080947876
Epoch: 11/30 Loss: 3.718786338806152
Epoch: 12/30 Loss: 3.6907887938212447
Epoch: 12/30 Loss: 3.6858580284118654
Epoch: 12/30 Loss: 3.701261080741882
Epoch: 13/30 Loss: 3.660750239162471
Epoch: 13/30 Loss: 3.6622575674057005
Epoch: 13/30 Loss: 3.6846286358833313
Epoch: 14/30 Loss: 3.643589364050532
Epoch: 14/30 Loss: 3.6504557995796203
Epoch: 14/30 Loss: 3.6525452489852905
Epoch: 15/30 Loss: 3.616130003240588
Epoch: 15/30 Loss: 3.6248851132392885
Epoch: 15/30 Loss: 3.632484414577484
Epoch: 16/30 Loss: 3.6021579985033
Epoch: 16/30 Loss: 3.6072781252861024
Epoch: 16/30 Loss: 3.6240361161231993
Epoch: 17/30 Loss: 3.5787458966779
Epoch: 17/30 Loss: 3.590819798946381
Epoch: 17/30 Loss: 3.6005235805511475
Epoch: 18/30 Loss: 3.5700792046854533
Epoch: 18/30 Loss: 3.5753119678497316
Epoch: 18/30 Loss: 3.5854785614013673
Epoch: 19/30 Loss: 3.5604437407855243
Epoch: 19/30 Loss: 3.5600452198982238
Epoch: 19/30 Loss: 3.5653090324401857
Epoch: 20/30 Loss: 3.54416574028983
Epoch: 20/30 Loss: 3.5412900671958925
Epoch: 20/30 Loss: 3.5544599776268004
Epoch: 21/30 Loss: 3.531428624743875
Epoch: 21/30 Loss: 3.5389969487190247
Epoch: 21/30 Loss: 3.5493617205619814
Epoch: 22/30 Loss: 3.5167748783281456
Epoch: 22/30 Loss: 3.5114588212966917
Epoch: 22/30 Loss: 3.5327849850654602
Epoch: 23/30 Loss: 3.5080896492107354
Epoch: 23/30 Loss: 3.4989126200675966
Epoch: 23/30 Loss: 3.5212397809028624
Epoch: 24/30 Loss: 3.4915077535086154
Epoch: 24/30 Loss: 3.498850947856903
Epoch: 24/30 Loss: 3.508957397937775
Epoch: 25/30 Loss: 3.481622135108299
Epoch: 25/30 Loss: 3.4850300221443176
Epoch: 25/30 Loss: 3.502882728099823
Epoch: 26/30 Loss: 3.47805429438026
Epoch: 26/30 Loss: 3.4771142077445982
Epoch: 26/30 Loss: 3.4903489270210266
Epoch: 27/30 Loss: 3.463992622378062
Epoch: 27/30 Loss: 3.469563786029816
Epoch: 27/30 Loss: 3.475732074737549
Epoch: 28/30 Loss: 3.454071216094188
Epoch: 28/30 Loss: 3.45453217458725
Epoch: 28/30 Loss: 3.4747745909690857
Epoch: 29/30 Loss: 3.4471531953567114
Epoch: 29/30 Loss: 3.4408276104927062
Epoch: 29/30 Loss: 3.465437880039215
Epoch: 30/30 Loss: 3.435562095178766
Epoch: 30/30 Loss: 3.446127962112427
Epoch: 30/30 Loss: 3.445101849079132
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** 1. For the sequence length I tried 4, 8, 16, and 32, and found 8 to be the most suitable. 2. One to three hidden layers is the usual range, and 2 is enough to capture reasonably complex structure, so I used 2. 3. I tried several batch sizes (powers of two) and 512 worked well. 4. A lower hidden_dim makes training slower to converge and risks an imprecise model, so I avoided making it too small. 5. After several experiments, a learning rate of 0.001 got me to the target loss. 6. For NLP models, an embedding dimension of 200-300 is commonly suggested for vocabularies of around 10,000-15,000 unique words, so I started with 200 and it worked. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts,key = counts.get,reverse = True)
vocab_to_int = {word: ii for ii,word in enumerate(vocab,1)}
int_to_vocab = {ii: word for ii,word in enumerate(vocab,1)}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
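As a quick, optional sanity check (a sketch on a toy token list, not part of the graded tests), the two dictionaries should invert each other:
v2i, i2v = create_lookup_tables(['to', 'be', 'or', 'not', 'to', 'be'])
assert all(i2v[v2i[word]] == word for word in v2i)  # every word round-trips through its id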
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
diction = {}
diction['.'] = '||period||'
diction[','] = '||comma||'
diction['\"'] = '||quotation_mark||'
diction[';'] = '||semicolon||'
diction['!'] = '||exclamation_mark||'
diction['?'] = '||question_mark||'
diction['('] = '||left_parentheses||'
diction[')'] = '||right_parentheses||'
diction['-'] = '||dash||'
diction['\n'] = '||return||'
return diction
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
target_tensors = []
feature_tensors = []
for i in range(len(words) - sequence_length):
feature_tensors.append(words[i:i+sequence_length])
target_tensors.append(words[i+sequence_length])
# return a dataloader
feature_tensors = torch.tensor(feature_tensors)
target_tensors = torch.tensor(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data,batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
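The batches above come back in their original order because this `batch_data` builds the DataLoader without shuffling; if shuffled batches are preferred (as in the expected output quoted earlier), the DataLoader inside `batch_data` could be created with `shuffle=True`, for example:
data_loader = DataLoader(data, batch_size=batch_size, shuffle=True)  # randomize batch order each epoch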
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
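As a shape walk-through of hints 1 and 2 (a sketch with made-up sizes, not part of the project code), the reshaping that extracts the last word scores looks like this:
import torch
batch_size, seq_len, hidden_dim, output_size = 4, 6, 8, 10
lstm_out = torch.randn(batch_size, seq_len, hidden_dim)   # stand-in for the LSTM output
flat = lstm_out.contiguous().view(-1, hidden_dim)         # (batch*seq, hidden) for the fully-connected layer
scores = torch.randn(flat.size(0), output_size)           # stand-in for the fully-connected layer's output
scores = scores.view(batch_size, -1, output_size)         # reshape back to (batch, seq, output)
last = scores[:, -1]                                      # (batch, output): scores for the final time step only
print(last.shape)                                         # torch.Size([4, 10])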
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
# linear and sigmoid layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
nn_input = nn_input.long()
embeds = self.embed(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return out[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if (train_on_gpu):
# rnn = rnn.cuda()
inp,target = inp.cuda(),target.cuda()
# detach the hidden state from its history so gradients don't backpropagate through earlier batches
hidden = tuple([each.data for each in hidden])
# perform forward pass, backpropagation and optimization
out , hidden = rnn(inp,hidden)
loss = criterion(out,target.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.0009
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 500
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn = rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.887243369102478
Epoch: 1/20 Loss: 5.2746011934280395
Epoch: 1/20 Loss: 4.863280238628388
Epoch: 1/20 Loss: 4.633268164157867
Epoch: 1/20 Loss: 4.4605137720108035
Epoch: 1/20 Loss: 4.509580154418945
Epoch: 2/20 Loss: 4.348491438278338
Epoch: 2/20 Loss: 4.101797121524811
Epoch: 2/20 Loss: 4.152081150054932
Epoch: 2/20 Loss: 4.059343485832215
Epoch: 2/20 Loss: 4.005234693527222
Epoch: 2/20 Loss: 4.105318975448609
Epoch: 3/20 Loss: 4.020878631894181
Epoch: 3/20 Loss: 3.85475603055954
Epoch: 3/20 Loss: 3.92665558719635
Epoch: 3/20 Loss: 3.8608516602516176
Epoch: 3/20 Loss: 3.80488276052475
Epoch: 3/20 Loss: 3.904042951107025
Epoch: 4/20 Loss: 3.8269558061913744
Epoch: 4/20 Loss: 3.681981324672699
Epoch: 4/20 Loss: 3.7687328085899354
Epoch: 4/20 Loss: 3.7199491024017335
Epoch: 4/20 Loss: 3.668596122264862
Epoch: 4/20 Loss: 3.7591842193603515
Epoch: 5/20 Loss: 3.6851532502872186
Epoch: 5/20 Loss: 3.558307955741882
Epoch: 5/20 Loss: 3.6447519097328187
Epoch: 5/20 Loss: 3.6118097095489503
Epoch: 5/20 Loss: 3.556797326564789
Epoch: 5/20 Loss: 3.6510276527404786
Epoch: 6/20 Loss: 3.578385325224419
Epoch: 6/20 Loss: 3.4700098037719727
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
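To see the top-k sampling step in isolation, here is a minimal sketch using made-up word scores (the score tensor and vocabulary size are hypothetical, not taken from the project):
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.randn(1, 8)               # hypothetical scores for a vocabulary of 8 words
p = F.softmax(scores, dim=1).data        # turn scores into probabilities
p, top_i = p.topk(5)                     # keep only the 5 most likely words
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())  # sample one of the top 5, weighted by probability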
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
import numpy as np
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from string import punctuation
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
print(text[:10])
#clean_text = text.lower().translate(str.maketrans('','', punctuation))
#unique_words = list(set(word for word in clean_text.split()))
unique_words = list(set(word for word in text ))
vocab_to_int = dict(zip(unique_words, range(len(unique_words))))
int_to_vocab = {v:k for k,v in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
['moe_szyslak', "moe's", 'tavern', 'where', 'the', 'elite', 'meet', 'to', 'drink', 'bart_simpson']
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
import unicodedata
import re
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
delimiter = '||{}||'
puncs = ['.', ',', '"', ';', '!', '?', '(', ')', '-']
punc_to_token = {k: delimiter.format(
re.sub('\s|-', '_', unicodedata.name(k) )) for k in puncs }
punc_to_token['\n'] = delimiter.format('Return')
return punc_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
token_lookup()
###Output
_____no_output_____
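For reference, this implementation derives each token from the character's Unicode name, which is why the pre-processing output further down shows tokens such as `||full_stop||` (the text is lowercased during pre-processing). A minimal check of what the expression produces:
import re
import unicodedata

print(unicodedata.name('.'))                                          # FULL STOP
print('||{}||'.format(re.sub(r'\s|-', '_', unicodedata.name('.'))))   # ||FULL_STOP||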
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
['this', 'is', 'out', '||full_stop||', '||full_stop||', '||full_stop||', 'and', 'out', 'is', 'one']
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# sanity checks on the preprocessed data
assert len(int_to_vocab) == len(vocab_to_int)
len(int_to_vocab)  # vocabulary size
len(np.unique(int_text))  # distinct ids actually used in the corpus
missings = set(int_to_vocab.keys()) - set(int_text)  # vocabulary ids that never occur in the corpus
missings
a, = missings  # unpack the single unused id (fails if there is more than one)
int_to_vocab[a]  # look up which token it is
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
tot_batches = len(words) // (batch_size)
words = words[:tot_batches * batch_size]
# TODO: Implement function
X = [words[i:i+sequence_length] for i in range(len(words) - sequence_length)]
y = words[sequence_length: ]
X = torch.Tensor(X).long()
y = torch.Tensor(y).long()
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
# return a dataloader
return loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
assert len(sample_x) == len(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
self.hidden_dim = hidden_dim
self.embedding_dim = embedding_dim
self.output_size = output_size
self.n_layers = n_layers
super(RNN, self).__init__()
self.EMBED = nn.Embedding(vocab_size, embedding_dim)
self.LSTM = nn.LSTM(embedding_dim, hidden_dim,
n_layers, batch_first=True, dropout=dropout)
self.FC = nn.Sequential(
nn.Dropout(0.25),
nn.Linear(hidden_dim, output_size)
)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = len(nn_input)
try:
out = self.EMBED(nn_input)
out, hidden = self.LSTM(out, hidden)
out = out.contiguous().view(-1, self.hidden_dim)
out = self.FC(out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
# debugging aid: if the forward pass fails, show the offending batch and re-raise the error
except RuntimeError as e:
print(nn_input)
raise
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
a = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
b = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
if(train_on_gpu):
a = a.cuda()
b = b.cuda()
# initialize hidden state with zero weights, and move to GPU if available
return (a,b)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
import os
# make CUDA errors surface synchronously at the failing call, which helps debugging
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# the shell command below runs in a subshell, so it does not affect this kernel;
# the os.environ assignment above is the effective one here
!export CUDA_LAUNCH_BLOCKING=1
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
rnn.train()
if(train_on_gpu):
inp = inp.cuda()
target = target.cuda()
rnn.zero_grad()
optimizer.zero_grad()
hidden = tuple([t.clone().detach() for t in hidden])
out, hidden = rnn(inp, hidden)
loss = criterion(out.squeeze(), target)
loss_val = loss.item()
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 4) #avoid gradient explosion
optimizer.step()
optimizer.zero_grad()
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return loss_val, hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# explore the distribution of words per line to pick a sensible sequence_length
l = list(map(lambda x: len(x.split()), lines))
np.mean(l), np.max(l), np.min(l)  # average, longest and shortest line length
import matplotlib.pyplot as plt
plt.hist(l)  # full distribution of line lengths
plt.hist(l, bins=range(0, 100, 10))  # zoom in on lines up to 100 words
plt.hist(l, bins=range(0, 40, 2))  # finer view of the typical 0-40 word range
# Data params
# Sequence Length
sequence_length = 12 # of words in a sequence
# Batch Size
batch_size = 248
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 6
# Learning Rate
learning_rate = 1e-3
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 2**8
# Hidden Dimension
hidden_dim = 2**9
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn. Previously I reached a loss of around 3.7; I trained again to reach below 3.5.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
#rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 6 epoch(s)...
Epoch: 1/6 Loss: 3.602133973479271
Epoch: 2/6 Loss: 3.5725604769120096
Epoch: 3/6 Loss: 3.478540582107888
Epoch: 4/6 Loss: 3.408708731974325
Epoch: 5/6 Loss: 3.353948412867091
Epoch: 6/6 Loss: 3.3071689980313828
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - Yes, I tried different sequence lengths. The average number of words per line was around 5.5, so I tried sequence lengths from 5 up to 40. I noticed an improvement in both performance and loss when the seq_len is around 10-15. - For `hidden_dim` and `n_layers` I used trial and error over different values, balancing performance against the training score. I noticed that increasing `hidden_dim` beyond 300 decreases performance with no significant training improvement, so 256 was an ideal value. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: i think i can do it.
george: i know.(he leaves)
george:(to jerry) i can't get it back.
george: what? what is it?
newman: i don't understand why we have to be in business.
morty:(to jerry) hey, hey, hey.
jerry: hey, hey.
george: hey! hey, hey, hey, what is that, hickory?
kramer: i don't think so.
newman: well, i think i may have done this.
george: i know what i do, i don't know. i think i should.
elaine: oh...(george exits)
kramer: well, you know what? i mean...
elaine: oh, yeah.
jerry: yeah.
kramer: yeah, yeah.
jerry: i know what you're doing.
george: i know. i'm not getting a cab.
jerry: i know.
george: what are you doing?
george: i don't know. i mean, i think i would really like to have to do that.
jerry: i can't believe i have to say it.
kramer: well, i don't know, i know i have to do that voice.
jerry: oh, yeah.
elaine: oh yeah, sure... i don't know.
george:(to jerry) you know i was thinking i was just wondering what i did, i don't want to go to the hospital to get some sleep and get out of here, right?
jerry: yeah yeah, i got some very interested.(kramer nods and leaves)
elaine:(to kramer) hey, i got it, i got a great entrance for a few weeks, but i think we should see a lot of money.
george:(to the phone) hey, what do you think, you want me to do that.
elaine: well,
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
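One possible way to complete the stub above, mirroring the `Counter`-based implementation that appears earlier in this document (a sketch, not the only valid solution):
from collections import Counter

def create_lookup_tables(text):
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)        # most frequent words first
    vocab_to_int = {word: ii for ii, word in enumerate(vocab)}  # word -> id
    int_to_vocab = {ii: word for ii, word in enumerate(vocab)}  # id -> word
    return (vocab_to_int, int_to_vocab)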
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
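Here is a hedged sketch of that windowing for the toy example above (this is not the graded `batch_data` implementation, just the feature/target pairing spelled out):

```python
# illustrative sketch of the feature/target windows described above
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
print(features)   # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)    # [5, 6, 7]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
loader = DataLoader(data, batch_size=2)
```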
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
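To make the hints above concrete, here is a shape walkthrough with toy dimensions (the numbers are arbitrary examples, not required values, and this is a sketch rather than the full module):

```python
# shape walkthrough only -- assumes an Embedding -> LSTM -> Linear stack with toy sizes
import torch
import torch.nn as nn

batch_size, seq_length = 10, 5
vocab_size, embedding_dim, hidden_dim, output_size, n_layers = 20, 8, 16, 20, 2

embed = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True)
fc = nn.Linear(hidden_dim, output_size)

x = torch.randint(0, vocab_size, (batch_size, seq_length))    # (10, 5) word ids
lstm_out, hidden = lstm(embed(x))                             # lstm_out: (10, 5, 16)
out = fc(lstm_out.contiguous().view(-1, hidden_dim))          # (50, 20)
out = out.view(batch_size, -1, output_size)[:, -1]            # (10, 20) last word scores
print(out.shape)                                              # torch.Size([10, 20])
```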
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
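As a hedged reminder of the usual PyTorch training-step pattern (parameter names here mirror the function below, but treat this as a sketch, not a drop-in solution): detach the incoming hidden state so backpropagation does not run through the entire history, zero the gradients, run the model, compute the loss, backpropagate, clip the gradients, and step the optimizer.

```python
# sketch of a typical train step, assuming an LSTM hidden state given as a tuple
def train_step_sketch(rnn, optimizer, criterion, inp, target, hidden):
    if train_on_gpu:                                  # defined earlier in this notebook
        inp, target = inp.cuda(), target.cuda()
    hidden = tuple(h.data for h in hidden)            # detach from previous batches
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target.long())
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)     # guard against exploding gradients
    optimizer.step()
    return loss.item(), hidden
```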
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's training progress will be printed after a set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
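One illustrative starting point (these particular values are assumptions for the sketch and not required settings; only the vocabulary-dependent sizes follow directly from the preprocessed data):

```python
# illustrative starting values only -- expect to tune these
sequence_length = 10              # on the order of the average line length (~5.5 words) seen earlier
batch_size = 128                  # assumed; reduce if you run out of GPU memory
num_epochs = 10
learning_rate = 0.001
vocab_size = len(vocab_to_int)    # one output score per token in the vocabulary
output_size = vocab_size
embedding_dim = 300               # much smaller than vocab_size
hidden_dim = 256
n_layers = 2
show_every_n_batches = 500
```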
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
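As a small, self-contained illustration of the top-k sampling step used inside `generate` (toy scores only):

```python
# illustrative top-k sampling over made-up word scores
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.0, 0.1, 3.0, 0.2]])   # pretend scores for 6 words
p = F.softmax(scores, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)                          # keep the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

word_i = np.random.choice(top_i, p=p / p.sum())   # sample among them with renormalized probs
print(top_i, word_i)                              # e.g. [4 0 2] and one of those indices
```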
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {"\n":"||NEW_LINE||",
".":"||PERIOD||",
",":"||COMMA||",
"\"":"||QUOTATION_MARK||",
";":"||SEMICOLON||",
"!":"||EXCLAMATION_MARK||",
"?":"||QUESTION_MARK||",
"(":"||LEFT_PAREN||",
")":"||RIGHT_PAREN||",
"-":"||HYPHEN||"}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
import numpy as np
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
num_windows = len(words)-sequence_length
feature_tensors = np.zeros((num_windows, sequence_length), int)
target_tensors = np.zeros(num_windows, int)
for i in range(num_windows):
feature_tensors[i] = words[i:(i+sequence_length)]
target_tensors[i] = words[(i+sequence_length)]
data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors))
data_loader = DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
#batch_data([1,2,3,4,5,6,7,8,9,10],5,10)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout layer
#### self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
#### self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
#print("batch_size=50, sequence_length=3, vocab_size=20, output_size=20, embedding_dim=15, hidden_dim=10, n_layers=2")
# embeddings and lstm_out
#### nn_input = nn_input.long()
#print("nn_input: " + str(nn_input.shape))
embeds = self.embedding(nn_input)
#print("embeds: " + str(embeds.shape))
lstm_out, hidden = self.lstm(embeds, hidden)
#print("lstm_out_initial: " + str(lstm_out.shape))
#print("hidden: " + str(hidden[0].shape))
#### lstm_out = lstm_out[:, -1, :] # getting the last time step output
#print("lstm_out_only_last: " + str(lstm_out.shape))
# dropout and fully-connected layer
#### lstm_out = self.dropout(lstm_out)
# Stack up LSTM outputs using view
out = lstm_out.contiguous().view(-1, self.hidden_dim)
#print("lstm_out_resize: " + str(out.shape))
out = self.fc(out)
#print("fc_output: " + str(out.shape))
# sigmoid function
#### sig_out = self.sig(out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(nn_input.shape[0], -1, self.output_size)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inputs, targets, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inputs: A batch of input to the neural network
:param targets: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
#print("batch_size=200, input_size=20, output_size=10, sequence_length=3, embedding_dim=15, hidden_dim=10, n_layers=2, learning_rate=0.01")
# move data to GPU, if available
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
#print("inputs: " + str(inputs.shape))
#print("hidden: " + str(hidden[0].shape))
output, hidden = rnn(inputs, hidden)
#print("output: " + str(output.shape))
#print("targets: " + str(targets.shape))
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), targets.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and the data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's training progress will be printed after a set number of batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 50
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 12
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 1500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 12 epoch(s)...
Epoch: 1/12 Loss: 5.382296454429627
Epoch: 1/12 Loss: 4.766961904843648
Epoch: 1/12 Loss: 4.51070561726888
Epoch: 1/12 Loss: 4.55942829322815
Epoch: 1/12 Loss: 4.536936652501424
Epoch: 1/12 Loss: 4.5027955611546835
Epoch: 1/12 Loss: 4.364610230763753
Epoch: 1/12 Loss: 4.378511615912119
Epoch: 1/12 Loss: 4.358441351890564
Epoch: 1/12 Loss: 4.508118573506673
Epoch: 1/12 Loss: 4.482987241586049
Epoch: 2/12 Loss: 4.308184862891861
Epoch: 2/12 Loss: 4.1413512289524075
Epoch: 2/12 Loss: 3.993586084206899
Epoch: 2/12 Loss: 4.103677097002665
Epoch: 2/12 Loss: 4.141811655521392
Epoch: 2/12 Loss: 4.163006536165873
Epoch: 2/12 Loss: 4.028586396932602
Epoch: 2/12 Loss: 4.063893524646759
Epoch: 2/12 Loss: 4.074651515960693
Epoch: 2/12 Loss: 4.225006390889486
Epoch: 2/12 Loss: 4.199901998837789
Epoch: 3/12 Loss: 4.096897686266043
Epoch: 3/12 Loss: 3.9598137588500975
Epoch: 3/12 Loss: 3.833541934967041
Epoch: 3/12 Loss: 3.9427824546496075
Epoch: 3/12 Loss: 3.976020507176717
Epoch: 3/12 Loss: 3.981061183929443
Epoch: 3/12 Loss: 3.8797401956717175
Epoch: 3/12 Loss: 3.9101320581436156
Epoch: 3/12 Loss: 3.923528670152028
Epoch: 3/12 Loss: 4.087516144116719
Epoch: 3/12 Loss: 4.051521681149801
Epoch: 4/12 Loss: 3.9634128051438355
Epoch: 4/12 Loss: 3.83914767964681
Epoch: 4/12 Loss: 3.716419871489207
Epoch: 4/12 Loss: 3.80715305407842
Epoch: 4/12 Loss: 3.8360372813542685
Epoch: 4/12 Loss: 3.8625567728678387
Epoch: 4/12 Loss: 3.772729421536128
Epoch: 4/12 Loss: 3.768461557865143
Epoch: 4/12 Loss: 3.802092656294505
Epoch: 4/12 Loss: 3.964041507403056
Epoch: 4/12 Loss: 3.9088448918660483
Epoch: 5/12 Loss: 3.842247732125228
Epoch: 5/12 Loss: 3.726089081128438
Epoch: 5/12 Loss: 3.6267248492240904
Epoch: 5/12 Loss: 3.694848281065623
Epoch: 5/12 Loss: 3.7287139987945555
Epoch: 5/12 Loss: 3.7605069545110066
Epoch: 5/12 Loss: 3.6608082218170166
Epoch: 5/12 Loss: 3.665735421339671
Epoch: 5/12 Loss: 3.7008822974363964
Epoch: 5/12 Loss: 3.8427698403994244
Epoch: 5/12 Loss: 3.8082571652730306
Epoch: 6/12 Loss: 3.746616207194278
Epoch: 6/12 Loss: 3.638549703280131
Epoch: 6/12 Loss: 3.545618828932444
Epoch: 6/12 Loss: 3.6178971621195477
Epoch: 6/12 Loss: 3.6432237571875254
Epoch: 6/12 Loss: 3.669344506899516
Epoch: 6/12 Loss: 3.585422807216644
Epoch: 6/12 Loss: 3.583431126674016
Epoch: 6/12 Loss: 3.606181358019511
Epoch: 6/12 Loss: 3.7494511752128603
Epoch: 6/12 Loss: 3.720742782354355
Epoch: 7/12 Loss: 3.6634636787274286
Epoch: 7/12 Loss: 3.5725717662970227
Epoch: 7/12 Loss: 3.483094269911448
Epoch: 7/12 Loss: 3.546909927209218
Epoch: 7/12 Loss: 3.5807827563285826
Epoch: 7/12 Loss: 3.602562974770864
Epoch: 7/12 Loss: 3.5161950318813324
Epoch: 7/12 Loss: 3.518695900917053
Epoch: 7/12 Loss: 3.54108612259229
Epoch: 7/12 Loss: 3.6807830793062846
Epoch: 7/12 Loss: 3.6526482915083567
Epoch: 8/12 Loss: 3.6007220522775185
Epoch: 8/12 Loss: 3.511210502068202
Epoch: 8/12 Loss: 3.432588779290517
Epoch: 8/12 Loss: 3.487544007619222
Epoch: 8/12 Loss: 3.5258040126959482
Epoch: 8/12 Loss: 3.5514919748306273
Epoch: 8/12 Loss: 3.4677621422608693
Epoch: 8/12 Loss: 3.4702265093326568
Epoch: 8/12 Loss: 3.4821366413434345
Epoch: 8/12 Loss: 3.63070587793986
Epoch: 8/12 Loss: 3.589202776114146
Epoch: 9/12 Loss: 3.548762606156703
Epoch: 9/12 Loss: 3.4814633708000184
Epoch: 9/12 Loss: 3.387579452912013
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Mostly trial and error. I initially went with a sequence length of 200, then realized the sequence length should reflect the typical length of a line of dialogue, which the statistics above put at about 5.5 words, so I reduced it to 10. Batch size is 50; I tried 100 and more but ran out of GPU memory, so I stopped changing it. The number of epochs was initially 4-5 and the loss never dropped below 4; increasing it to 9 brought the loss down to 3.5, and a 12-epoch run (to see how far below 3.5 it could go) timed out. For the learning rate I experimented with 0.001, 0.005 and 0.01. For the embedding dimension I chose 400, roughly 1/100 of the vocabulary/input size (~46K). For the hidden dimension and the number of layers I chose relatively large values. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:51: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted (word_counts, key=word_counts.get, reverse = True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punctuation_dict ={
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Paren||',
        ')': '||Right_Paren||',
        '-': '||Dash||',
        '\n': '||Return||'
}
return punctuation_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
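Before wiring this into a DataLoader, here is a minimal sketch of the sliding-window feature/target construction for the toy example above:
```
# sliding window over words = [1..7] with sequence_length = 4
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]
print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```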
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
y_length = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_length):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
y_batch = words[idx_end]
y.append(y_batch)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure the SHUFFLE your training data
dataloader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 3, 4, 5, 6, 7],
[ 36, 37, 38, 39, 40],
[ 28, 29, 30, 31, 32],
[ 19, 20, 21, 22, 23],
[ 33, 34, 35, 36, 37],
[ 41, 42, 43, 44, 45],
[ 39, 40, 41, 42, 43],
[ 20, 21, 22, 23, 24],
[ 17, 18, 19, 20, 21],
[ 38, 39, 40, 41, 42]])
torch.Size([10])
tensor([ 8, 41, 33, 24, 38, 46, 44, 25, 22, 43])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
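As a quick shape check of the reshaping described in hint 2, here is a standalone sketch with dummy sizes (not part of the model itself):
```
import torch
batch_size, seq_length, output_size = 10, 5, 21               # dummy sizes for illustration
stacked = torch.zeros(batch_size * seq_length, output_size)   # stacked fc output
reshaped = stacked.view(batch_size, -1, output_size)          # (10, 5, 21)
last_word_scores = reshaped[:, -1]                            # (10, 21)
print(last_word_scores.shape)  # torch.Size([10, 21])
```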
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# linear layer
self.fc = nn.Linear(hidden_dim, output_size)
#dropout
self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
fc_out = self.fc(lstm_out)
# reshape to be batch_size first
fc_out = fc_out.view(batch_size, -1, self.output_size)
fc_out = fc_out[:, -1] # get last batch of labels
# return last sigmoid output and hidden state
return fc_out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
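One detail worth seeing in isolation is why the hidden state is re-wrapped at every batch; the implementation below does this with `tuple([each.data for each in hidden])`. A toy sketch:
```
import torch
h = (torch.zeros(2, 4, 8, requires_grad=True), torch.zeros(2, 4, 8, requires_grad=True))
h = tuple(each.data for each in h)   # detach from the graph of previous batches
print([t.requires_grad for t in h])  # [False, False] -> no backprop through old history
```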
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if (train_on_gpu):
rnn.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if (train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
# get the output from the model
output, h = rnn(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)+1
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 300
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 4.995299962639809
Epoch: 1/15 Loss: 4.421565550327301
Epoch: 1/15 Loss: 4.258525164246559
Epoch: 2/15 Loss: 4.076182387588481
Epoch: 2/15 Loss: 3.973931924700737
Epoch: 2/15 Loss: 3.9418499126434328
Epoch: 3/15 Loss: 3.8258294144248706
Epoch: 3/15 Loss: 3.768056872487068
Epoch: 3/15 Loss: 3.7785115662813187
Epoch: 4/15 Loss: 3.671331323221366
Epoch: 4/15 Loss: 3.643831925034523
Epoch: 4/15 Loss: 3.659439428925514
Epoch: 5/15 Loss: 3.561268246318452
Epoch: 5/15 Loss: 3.535648198723793
Epoch: 5/15 Loss: 3.561878904104233
Epoch: 6/15 Loss: 3.475887758953552
Epoch: 6/15 Loss: 3.4558649756908415
Epoch: 6/15 Loss: 3.486779133558273
Epoch: 7/15 Loss: 3.4104048094016846
Epoch: 7/15 Loss: 3.386608862876892
Epoch: 7/15 Loss: 3.416811353683472
Epoch: 8/15 Loss: 3.346717089816245
Epoch: 8/15 Loss: 3.3424735304117204
Epoch: 8/15 Loss: 3.3563212617635725
Epoch: 9/15 Loss: 3.302091039335631
Epoch: 9/15 Loss: 3.284151251077652
Epoch: 9/15 Loss: 3.3161020565032957
Epoch: 10/15 Loss: 3.2542494463952725
Epoch: 10/15 Loss: 3.2349173226356505
Epoch: 10/15 Loss: 3.281742657184601
Epoch: 11/15 Loss: 3.2124535504859093
Epoch: 11/15 Loss: 3.2041785420179365
Epoch: 11/15 Loss: 3.246911931872368
Epoch: 12/15 Loss: 3.173727209436283
Epoch: 12/15 Loss: 3.180486132860184
Epoch: 12/15 Loss: 3.213321935415268
Epoch: 13/15 Loss: 3.1409198117224033
Epoch: 13/15 Loss: 3.1389528231620787
Epoch: 13/15 Loss: 3.188457790851593
Epoch: 14/15 Loss: 3.11691429713023
Epoch: 14/15 Loss: 3.115593685388565
Epoch: 14/15 Loss: 3.1549170449972155
Epoch: 15/15 Loss: 3.09170830627336
Epoch: 15/15 Loss: 3.0923308324813843
Epoch: 15/15 Loss: 3.1297739617824556
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Sequence length: I tried sequence lengths up to 50 and found that 10 gave the lowest training loss. Batch size: According to the hyperparameters lesson, acceptable batch sizes range from 32 to 128, with larger batches costing more memory and compute per step; comparing 64 and 128, I found that 128 gave a noticeably lower training loss. Learning rate: I experimented with learning rates from 0.01 down to 0.0001 and found that 0.0005 worked best. Embedding dimension: I experimented with embedding dimensions from 100 to 1000 and found 200 to be the best value. Hidden dimension: I experimented with values from 200 to 600 and chose 300; it gave a lower loss than the alternatives with less processing time. n_layers: According to the lesson, 3 is a good number of layers, but I found that 2 is acceptable too, since it reaches a loss comparable to 3. Dropout layer after the LSTM layer: Unfortunately, adding the dropout layer made the loss decrease more slowly than without it; after 10 epochs the loss was still at 3.75. When I excluded the dropout layer, the loss decreased rapidly and reached 3.56 after the 5th epoch. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
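Here is a small standalone sketch of the top-k sampling step used inside `generate`, with dummy scores for a 5-word vocabulary:
```
import numpy as np
import torch
import torch.nn.functional as F
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2]])    # fake word scores (batch of 1)
p = F.softmax(scores, dim=1).data
p, top_i = p.topk(3)                                   # keep the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())        # sample with some randomness
print(word_i)                                          # one of 1, 3 or 2 (the top-3 indices)
```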
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:47: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {word: ii for ii, word in enumerate(set(text))}
int_to_vocab = {ii: word for ii, word in enumerate(vocab_to_int.keys())}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_dict = {}
punc_dict['.'] = "||Period||"
punc_dict[','] = "||Comma||"
punc_dict['"'] = "||Quotation_Mark||"
punc_dict[';'] = "||Semicolon||"
punc_dict['!'] = "||Exclamation_Mark||"
punc_dict['?'] = "||Question_Mark||"
punc_dict['('] = "||Left_Parentheses||"
punc_dict[')'] = "||Right_Parentheses||"
punc_dict['-'] = "||Dash||"
punc_dict['\n'] = "||Return||"
return punc_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# declaring feature & target tensors
feature = []
target = []
# iterating over the words to make batches
for ii in range(len(words)):
# batching till we reach the last word in text for target tensor
# eg: words[n] will be the target tensor for words[n-sequence_length], ... ,words[n-1] where n is length of words
if ii + sequence_length < len(words) :
feature.append(words[ii:ii+sequence_length])
target.append(words[ii+sequence_length])
# creating tensor from numpy arrays
feature_tensor = torch.from_numpy(np.array(feature))
target_tensor = torch.from_numpy(np.array(target))
# creating a dataloader
data = TensorDataset(feature_tensor, target_tensor)
dataloader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[15, 16, 17, 18, 19],
[23, 24, 25, 26, 27],
[28, 29, 30, 31, 32],
[18, 19, 20, 21, 22],
[22, 23, 24, 25, 26],
[ 4, 5, 6, 7, 8],
[30, 31, 32, 33, 34],
[39, 40, 41, 42, 43],
[37, 38, 39, 40, 41],
[40, 41, 42, 43, 44]])
torch.Size([10])
tensor([20, 28, 33, 23, 27, 9, 35, 44, 42, 45])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.hidden_dim = hidden_dim
self.output_size = output_size
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True, dropout=dropout)
self.fc = nn.Linear(hidden_dim, output_size)
self.sigmoid = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# storing the batch_size
batch_size = nn_input.size(0)
# looking up embedding values
embeds = self.embedding(nn_input)
# passing the input to LSTM cells
lstm_out, hidden = self.lstm(embeds, hidden)
# flattening the output of lstm to feed the fully connected layer
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# passing the LSTM output to fully-connected layer
output = self.fc(lstm_out)
# output = self.sigmoid(output)
#print("hidden_dim: {}, output_size: {}, batch_size: {}".format(self.hidden_dim, self.output_size, batch_size))
#print(output.shape)
# reshaping the output to batch_size
output = output.view(batch_size, -1, self.output_size)
#print(output.shape)
# storing the last batch of words in output
out = output[:, -1]
#print(out.shape)
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if train_on_gpu :
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
rnn = rnn.cuda()
inp = inp.cuda()
target = target.cuda()
# perform forward propagation
# creating new variables for hidden state, so that we don't back propagate
# through entire history of training
hidden = tuple([each.data for each in hidden])
# clearing the gradients
optimizer.zero_grad()
# passing the data through the model
preds, hidden = rnn(inp, hidden)
# print("preds shape: {}".format(preds.shape))
# print("target shape: {}".format(target.shape))
# perform backpropagation and optimization
# calculating the loss
loss = criterion(preds, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 200
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 8
# Learning Rate
learning_rate = 0.0005
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 8 epoch(s)...
Epoch: 1/8 Loss: 5.632480495452881
Epoch: 1/8 Loss: 4.942170736312867
Epoch: 1/8 Loss: 4.716282566070556
Epoch: 1/8 Loss: 4.56660439825058
Epoch: 1/8 Loss: 4.469807868003845
Epoch: 1/8 Loss: 4.3838198928833005
Epoch: 1/8 Loss: 4.348790532588959
Epoch: 1/8 Loss: 4.28408527469635
Epoch: 2/8 Loss: 4.177651564528545
Epoch: 2/8 Loss: 4.081936648368836
Epoch: 2/8 Loss: 4.079673293113708
Epoch: 2/8 Loss: 4.068662234306336
Epoch: 2/8 Loss: 4.04325536775589
Epoch: 2/8 Loss: 4.009149151325226
Epoch: 2/8 Loss: 4.020483733177185
Epoch: 2/8 Loss: 3.9962571992874145
Epoch: 3/8 Loss: 3.9034119735161465
Epoch: 3/8 Loss: 3.86343590927124
Epoch: 3/8 Loss: 3.8537089614868165
Epoch: 3/8 Loss: 3.8412228307724
Epoch: 3/8 Loss: 3.846558000087738
Epoch: 3/8 Loss: 3.8416546840667722
Epoch: 3/8 Loss: 3.8378375749588014
Epoch: 3/8 Loss: 3.831935047149658
Epoch: 4/8 Loss: 3.7613330808778604
Epoch: 4/8 Loss: 3.6869794187545777
Epoch: 4/8 Loss: 3.7026803255081178
Epoch: 4/8 Loss: 3.7180724806785586
Epoch: 4/8 Loss: 3.7059155254364016
Epoch: 4/8 Loss: 3.724037839412689
Epoch: 4/8 Loss: 3.7190098152160647
Epoch: 4/8 Loss: 3.7236519036293028
Epoch: 5/8 Loss: 3.647038556387027
Epoch: 5/8 Loss: 3.6019871573448183
Epoch: 5/8 Loss: 3.6007642459869387
Epoch: 5/8 Loss: 3.6029805097579954
Epoch: 5/8 Loss: 3.622204779148102
Epoch: 5/8 Loss: 3.6282464880943297
Epoch: 5/8 Loss: 3.6074222950935364
Epoch: 5/8 Loss: 3.6257841272354128
Epoch: 6/8 Loss: 3.5581482090055943
Epoch: 6/8 Loss: 3.513393308639526
Epoch: 6/8 Loss: 3.513257884025574
Epoch: 6/8 Loss: 3.5227911357879638
Epoch: 6/8 Loss: 3.523511145591736
Epoch: 6/8 Loss: 3.5454648828506468
Epoch: 6/8 Loss: 3.5306159801483155
Epoch: 6/8 Loss: 3.5454433569908144
Epoch: 7/8 Loss: 3.490649246176084
Epoch: 7/8 Loss: 3.4405820550918578
Epoch: 7/8 Loss: 3.4513167791366577
Epoch: 7/8 Loss: 3.460797918796539
Epoch: 7/8 Loss: 3.468031313419342
Epoch: 7/8 Loss: 3.4589882307052613
Epoch: 7/8 Loss: 3.465713171958923
Epoch: 7/8 Loss: 3.4684483218193054
Epoch: 8/8 Loss: 3.4220041977862516
Epoch: 8/8 Loss: 3.3742277727127075
Epoch: 8/8 Loss: 3.3936182470321654
Epoch: 8/8 Loss: 3.3995863003730773
Epoch: 8/8 Loss: 3.3969829425811766
Epoch: 8/8 Loss: 3.3975289940834044
Epoch: 8/8 Loss: 3.3959989790916443
Epoch: 8/8 Loss: 3.4389727234840395
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** * I tried different sequence lengths and my observation was that a sequence length around 20 worked well and made the model converge faster than a sequence length of 70-80 words.* With n_layers = 2 the model was able to learn more than with n_layers = 1.* I first tried hidden_dim = 128 and the loss decreased only up to a certain point; in my second attempt I used hidden_dim = 256, which helped push the loss below 3.5. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu before converting back to numpy
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: *any* deal's eyeliner's deal's deal's rented rented rented rented deal's deal's rented rented rented //www rented.
george: oh, well i- i can't do it.
jerry: i don't think i can do it.
jerry: i know, i- i don't know.
jerry: yeah, but it's just a great idea.(to george) i mean, i think we have to go to the bathroom.
jerry:(to kramer) i don't know how you want it.
jerry: what happened?
jerry: yeah.
kramer: oh.......
elaine: yeah.
jerry:(trying to hear it to george) hey, hey, i got a lot of money for you, but, you got the whole deal for you and i can be able to get the whole thing on your face.
george: well i don't think it's the only thing i can do, but, you can do it for a while.
elaine: what? what?
jerry: i can't believe you were doing that.
jerry: i know, i just want to know what i mean, but i was thinking about it. i mean, i don't think so, you know, i know.
jerry: i thought you were a woman?
elaine: well, it's a lot of a situation. it's a little bit of a semi.
jerry:(to jerry) hey, i don't want to talk to you!
jerry: well i don't want a lot of people in the shower and then i was a couple of times and i got to tell you what.. i mean... you think i'm going to go out?
jerry: no, i'm not...
jerry: yeah.
george:(to jerry) you know what i think? you don't know how you are.
george: yeah, yeah.
elaine: you think i can be able to be in the bathroom?
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
len(text)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
cnt = Counter(text)
vocab = sorted(cnt, key=cnt.get, reverse=True)
vocab_to_int = { word:idx for idx, word in enumerate(vocab) }
int_to_vocab = { idx:word for idx, word in enumerate(vocab) }
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
test_text = ['no', 'the', 'one', 'is', 'the','the', 'no']
# test_cnt = Counter(test_text)
# print(test_cnt)
# d = sorted(test_cnt, key=test_cnt.get, reverse=True)
# print(d)
# for i, word in enumerate(d):
# print(i, word)
test_vocab_to_int, test_int_to_vocab = create_lookup_tables(test_text)
print(test_vocab_to_int)
###Output
{'the': 0, 'no': 1, 'one': 2, 'is': 3}
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
ret = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation||',
';': '||Semicolon||',
'!': '||Exclamation||',
'?': '||Question||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return ret
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
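###Markdown
As a quick sanity check (not required by the tests), the snippet below shows roughly how these tokens are meant to be used: each punctuation symbol is replaced by its padded token so that a plain whitespace split keeps punctuation as separate "words". The real replacement logic lives in `helper.py`; this is only an illustration.
```python
# illustrative only: apply the token dictionary to a sample sentence
sample = "hello! how are you?"
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# expected: ['hello', '||Exclamation||', 'how', 'are', 'you', '||Question||']
```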
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
int_text[:200]
vocab_to_int
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
    # TODO: Implement function
    # slide a window of sequence_length word ids across the text;
    # the word immediately after each window is that window's target
    features = []
    targets = []
    for i in range(len(words) - sequence_length):
        features.append(words[i: i + sequence_length])
        targets.append(words[i + sequence_length])
    data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
    data_loader = DataLoader(data, batch_size=batch_size)
    # return a dataloader
    return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
data_loader = batch_data(int_text, 4, 5)
dataiter = iter(data_loader)
batch_x, batch_y = dataiter.next()
print(int_text[:50])
print(batch_x)
print(batch_y)
###Output
[24, 22, 47, 1, 1, 1, 17, 47, 22, 82, 20, 6, 1252, 545, 8782, 7189, 20, 241, 1, 149, 1, 1, 1, 84, 4, 200, 238, 149, 208, 58, 55, 135, 64, 47, 3, 24, 22, 18, 677, 208, 58, 1, 1, 1, 24, 220, 126, 2, 121, 50]
tensor([[ 24, 22, 47, 1],
[ 22, 47, 1, 1],
[ 47, 1, 1, 1],
[ 1, 1, 1, 17],
[ 1, 1, 17, 47]])
tensor([ 1, 1, 17, 47, 22])
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
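###Markdown
One detail worth noting: `batch_data` as written above does not shuffle, so the batches come out in order and match the test text exactly. Passing `shuffle=True` to the `DataLoader` is an optional variation, and is why the expected output above says your rows may appear in a different order. A self-contained toy example:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# illustrative only: the same kind of loader with shuffling enabled
feats = torch.arange(20).view(10, 2)
targs = torch.arange(10)
shuffled_loader = DataLoader(TensorDataset(feats, targs), batch_size=5, shuffle=True)
print(next(iter(shuffled_loader)))
```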
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(0.2)
self.fc = nn.Linear(hidden_dim, output_size)
#self.sigmoid = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
lstm_out = self.dropout(lstm_out)
fc_out = self.fc(lstm_out)
#sigmoid_out = self.sigmoid(out)
out = fc_out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
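###Markdown
Beyond the unit test, a quick shape check with toy sizes can confirm that the forward pass returns one row of word scores per sequence in the batch. The sizes below are arbitrary and chosen only for illustration:
```python
# illustrative only: tiny model, batch_size=5, sequence_length=4
toy_rnn = RNN(vocab_size=20, output_size=20, embedding_dim=8, hidden_dim=16, n_layers=2)
toy_input = torch.randint(0, 20, (5, 4))
if train_on_gpu:
    toy_rnn.cuda()
    toy_input = toy_input.cuda()
toy_hidden = toy_rnn.init_hidden(5)
toy_out, toy_hidden = toy_rnn(toy_input, toy_hidden)
print(toy_out.shape)   # expect torch.Size([5, 20]): one score per vocabulary word
```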
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
output, hidden = rnn(inp, hidden)
loss = criterion(output.squeeze(), target.long())
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
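###Markdown
An optional safeguard, not required by the tests above: LSTMs can suffer from exploding gradients, and if the loss ever spikes during training, a common remedy is to clip gradients between `loss.backward()` and `optimizer.step()` inside `forward_back_prop`. The max norm of 5 below is an assumed, tunable value:
```python
# optional: clip gradients before the optimizer step
torch.nn.utils.clip_grad_norm_(rnn.parameters(), 5)
```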
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
from datetime import datetime
print("Current Time =", datetime.now())
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
from datetime import datetime
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
epoch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
epoch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {}/{}, Progress in Epoch: {}/{}, Loss: {}. Time = {}'.format(
epoch_i, n_epochs, batch_i, len(train_loader), np.average(batch_losses), datetime.now()))
batch_losses = []
print('Epoch: {}/{}, Complete. AVRG Loss : {}. \n\n'.format(
epoch_i, n_epochs, np.average(epoch_losses)))
epoch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
len(int_text)
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.0003
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 400
###Output
_____no_output_____
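###Markdown
Optional sanity check (not part of the project rubric): before committing to a long run, a loader built over a small slice of `int_text` makes it cheap to confirm that the loss actually falls with the settings above. The slice size of 40,000 tokens is an arbitrary choice:
```python
# illustrative only: a quick loader over a small slice of the data
quick_loader = batch_data(int_text[:40000], sequence_length, batch_size)
print(len(quick_loader.dataset) // batch_size, 'full batches in the quick run')
```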
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
# rnn = helper.load_model('./save/trained_rnn')
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from workspace_utils import active_session
with active_session():
# rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10, Progress in Epoch: 400/13940, Loss: 3.605054897069931. Time = 2020-02-18 12:01:09.591732
Epoch: 1/10, Progress in Epoch: 800/13940, Loss: 3.571416969001293. Time = 2020-02-18 12:01:27.222853
Epoch: 1/10, Progress in Epoch: 1200/13940, Loss: 3.6262794977426527. Time = 2020-02-18 12:01:44.794836
Epoch: 1/10, Progress in Epoch: 1600/13940, Loss: 3.63182344853878. Time = 2020-02-18 12:02:02.386451
Epoch: 1/10, Progress in Epoch: 2000/13940, Loss: 3.567817769944668. Time = 2020-02-18 12:02:19.988084
Epoch: 1/10, Progress in Epoch: 2400/13940, Loss: 3.5950572127103806. Time = 2020-02-18 12:02:37.577387
Epoch: 1/10, Progress in Epoch: 2800/13940, Loss: 3.524261798262596. Time = 2020-02-18 12:02:55.147918
Epoch: 1/10, Progress in Epoch: 3200/13940, Loss: 3.4371326106786726. Time = 2020-02-18 12:03:12.740083
Epoch: 1/10, Progress in Epoch: 3600/13940, Loss: 3.4395338034629823. Time = 2020-02-18 12:03:30.333782
Epoch: 1/10, Progress in Epoch: 4000/13940, Loss: 3.542424525022507. Time = 2020-02-18 12:03:47.914981
Epoch: 1/10, Progress in Epoch: 4400/13940, Loss: 3.612706404328346. Time = 2020-02-18 12:04:05.537210
Epoch: 1/10, Progress in Epoch: 4800/13940, Loss: 3.5227244263887405. Time = 2020-02-18 12:04:23.145631
Epoch: 1/10, Progress in Epoch: 5200/13940, Loss: 3.49071392595768. Time = 2020-02-18 12:04:40.750032
Epoch: 1/10, Progress in Epoch: 5600/13940, Loss: 3.665648688673973. Time = 2020-02-18 12:04:58.582860
Epoch: 1/10, Progress in Epoch: 6000/13940, Loss: 3.718160879611969. Time = 2020-02-18 12:05:16.192778
Epoch: 1/10, Progress in Epoch: 6400/13940, Loss: 3.634050798416138. Time = 2020-02-18 12:05:33.802871
Epoch: 1/10, Progress in Epoch: 6800/13940, Loss: 3.589769725203514. Time = 2020-02-18 12:05:51.402862
Epoch: 1/10, Progress in Epoch: 7200/13940, Loss: 3.4900585186481474. Time = 2020-02-18 12:06:08.998451
Epoch: 1/10, Progress in Epoch: 7600/13940, Loss: 3.6459556722640993. Time = 2020-02-18 12:06:26.617825
Epoch: 1/10, Progress in Epoch: 8000/13940, Loss: 3.428907346725464. Time = 2020-02-18 12:06:44.231577
Epoch: 1/10, Progress in Epoch: 8400/13940, Loss: 3.5053271022439003. Time = 2020-02-18 12:07:01.837052
Epoch: 1/10, Progress in Epoch: 8800/13940, Loss: 3.548306875228882. Time = 2020-02-18 12:07:19.442637
Epoch: 1/10, Progress in Epoch: 9200/13940, Loss: 3.4951591509580613. Time = 2020-02-18 12:07:37.056759
Epoch: 1/10, Progress in Epoch: 9600/13940, Loss: 3.4763741570711137. Time = 2020-02-18 12:07:54.674413
Epoch: 1/10, Progress in Epoch: 10000/13940, Loss: 3.469826722741127. Time = 2020-02-18 12:08:12.286407
Epoch: 1/10, Progress in Epoch: 10400/13940, Loss: 3.619101067185402. Time = 2020-02-18 12:08:29.917437
Epoch: 1/10, Progress in Epoch: 10800/13940, Loss: 3.614825845360756. Time = 2020-02-18 12:08:47.545706
Epoch: 1/10, Progress in Epoch: 11200/13940, Loss: 3.7850104904174806. Time = 2020-02-18 12:09:05.395728
Epoch: 1/10, Progress in Epoch: 11600/13940, Loss: 3.6875649243593216. Time = 2020-02-18 12:09:23.017526
Epoch: 1/10, Progress in Epoch: 12000/13940, Loss: 3.671543435454369. Time = 2020-02-18 12:09:40.633408
Epoch: 1/10, Progress in Epoch: 12400/13940, Loss: 3.6527459222078322. Time = 2020-02-18 12:09:58.233821
Epoch: 1/10, Progress in Epoch: 12800/13940, Loss: 3.661191913485527. Time = 2020-02-18 12:10:15.858400
Epoch: 1/10, Progress in Epoch: 13200/13940, Loss: 3.644480064511299. Time = 2020-02-18 12:10:33.466185
Epoch: 1/10, Progress in Epoch: 13600/13940, Loss: 3.6915398734807967. Time = 2020-02-18 12:10:51.069107
Epoch: 1/10, Complete. AVRG Loss : 3.5862797902538475.
Epoch: 2/10, Progress in Epoch: 400/13940, Loss: 3.566291666159933. Time = 2020-02-18 12:11:23.628535
Epoch: 2/10, Progress in Epoch: 800/13940, Loss: 3.491341718733311. Time = 2020-02-18 12:11:41.232535
Epoch: 2/10, Progress in Epoch: 1200/13940, Loss: 3.5244996482133866. Time = 2020-02-18 12:11:58.863655
Epoch: 2/10, Progress in Epoch: 1600/13940, Loss: 3.537008687853813. Time = 2020-02-18 12:12:16.469568
Epoch: 2/10, Progress in Epoch: 2000/13940, Loss: 3.484640632867813. Time = 2020-02-18 12:12:34.110590
Epoch: 2/10, Progress in Epoch: 2400/13940, Loss: 3.49609339594841. Time = 2020-02-18 12:12:51.749783
Epoch: 2/10, Progress in Epoch: 2800/13940, Loss: 3.441936358809471. Time = 2020-02-18 12:13:09.558525
Epoch: 2/10, Progress in Epoch: 3200/13940, Loss: 3.3490901833772657. Time = 2020-02-18 12:13:27.171983
Epoch: 2/10, Progress in Epoch: 3600/13940, Loss: 3.3565933644771575. Time = 2020-02-18 12:13:44.790880
Epoch: 2/10, Progress in Epoch: 4000/13940, Loss: 3.452413011789322. Time = 2020-02-18 12:14:02.398197
Epoch: 2/10, Progress in Epoch: 4400/13940, Loss: 3.508250970840454. Time = 2020-02-18 12:14:19.998558
Epoch: 2/10, Progress in Epoch: 4800/13940, Loss: 3.437790619134903. Time = 2020-02-18 12:14:37.618669
Epoch: 2/10, Progress in Epoch: 5200/13940, Loss: 3.4107190912961958. Time = 2020-02-18 12:14:55.229646
Epoch: 2/10, Progress in Epoch: 5600/13940, Loss: 3.5701881366968156. Time = 2020-02-18 12:15:12.836225
Epoch: 2/10, Progress in Epoch: 6000/13940, Loss: 3.6215085220336913. Time = 2020-02-18 12:15:30.452934
Epoch: 2/10, Progress in Epoch: 6400/13940, Loss: 3.5377633535861968. Time = 2020-02-18 12:15:48.064833
Epoch: 2/10, Progress in Epoch: 6800/13940, Loss: 3.5052747529745103. Time = 2020-02-18 12:16:05.691164
Epoch: 2/10, Progress in Epoch: 7200/13940, Loss: 3.3933733141422273. Time = 2020-02-18 12:16:23.303511
Epoch: 2/10, Progress in Epoch: 7600/13940, Loss: 3.5405786830186843. Time = 2020-02-18 12:16:40.912407
Epoch: 2/10, Progress in Epoch: 8000/13940, Loss: 3.3607068854570388. Time = 2020-02-18 12:16:58.822231
Epoch: 2/10, Progress in Epoch: 8400/13940, Loss: 3.4140745347738264. Time = 2020-02-18 12:17:16.434415
Epoch: 2/10, Progress in Epoch: 8800/13940, Loss: 3.472627356648445. Time = 2020-02-18 12:17:34.052971
Epoch: 2/10, Progress in Epoch: 9200/13940, Loss: 3.4039964652061463. Time = 2020-02-18 12:17:51.659067
Epoch: 2/10, Progress in Epoch: 9600/13940, Loss: 3.4198771893978117. Time = 2020-02-18 12:18:09.258317
Epoch: 2/10, Progress in Epoch: 10000/13940, Loss: 3.380286959707737. Time = 2020-02-18 12:18:26.873423
Epoch: 2/10, Progress in Epoch: 10400/13940, Loss: 3.543619269132614. Time = 2020-02-18 12:18:44.499123
Epoch: 2/10, Progress in Epoch: 10800/13940, Loss: 3.530225414633751. Time = 2020-02-18 12:19:02.139164
Epoch: 2/10, Progress in Epoch: 11200/13940, Loss: 3.674264022707939. Time = 2020-02-18 12:19:19.756478
Epoch: 2/10, Progress in Epoch: 11600/13940, Loss: 3.613612679839134. Time = 2020-02-18 12:19:37.357876
Epoch: 2/10, Progress in Epoch: 12000/13940, Loss: 3.595281681418419. Time = 2020-02-18 12:19:54.977162
Epoch: 2/10, Progress in Epoch: 12400/13940, Loss: 3.587793853878975. Time = 2020-02-18 12:20:12.575412
Epoch: 2/10, Progress in Epoch: 12800/13940, Loss: 3.5641394674777986. Time = 2020-02-18 12:20:30.223344
Epoch: 2/10, Progress in Epoch: 13200/13940, Loss: 3.5645328480005265. Time = 2020-02-18 12:20:47.860818
Epoch: 2/10, Progress in Epoch: 13600/13940, Loss: 3.612954514026642. Time = 2020-02-18 12:21:05.725547
Epoch: 2/10, Complete. AVRG Loss : 3.498609483571295.
Epoch: 3/10, Progress in Epoch: 400/13940, Loss: 3.5031374241244326. Time = 2020-02-18 12:21:38.252756
Epoch: 3/10, Progress in Epoch: 800/13940, Loss: 3.448405632674694. Time = 2020-02-18 12:21:55.871650
Epoch: 3/10, Progress in Epoch: 1200/13940, Loss: 3.480855928063393. Time = 2020-02-18 12:22:13.469913
Epoch: 3/10, Progress in Epoch: 1600/13940, Loss: 3.486117476224899. Time = 2020-02-18 12:22:31.090728
Epoch: 3/10, Progress in Epoch: 2000/13940, Loss: 3.4260716885328293. Time = 2020-02-18 12:22:48.704472
Epoch: 3/10, Progress in Epoch: 2400/13940, Loss: 3.457034966945648. Time = 2020-02-18 12:23:06.327414
Epoch: 3/10, Progress in Epoch: 2800/13940, Loss: 3.3993911331892015. Time = 2020-02-18 12:23:23.954594
Epoch: 3/10, Progress in Epoch: 3200/13940, Loss: 3.3062002471089365. Time = 2020-02-18 12:23:41.581893
Epoch: 3/10, Progress in Epoch: 3600/13940, Loss: 3.314637385010719. Time = 2020-02-18 12:23:59.221918
Epoch: 3/10, Progress in Epoch: 4000/13940, Loss: 3.408686335682869. Time = 2020-02-18 12:24:16.858855
Epoch: 3/10, Progress in Epoch: 4400/13940, Loss: 3.4719362276792527. Time = 2020-02-18 12:24:34.479498
Epoch: 3/10, Progress in Epoch: 4800/13940, Loss: 3.383262341618538. Time = 2020-02-18 12:24:52.333150
Epoch: 3/10, Progress in Epoch: 5200/13940, Loss: 3.370024404525757. Time = 2020-02-18 12:25:09.942362
Epoch: 3/10, Progress in Epoch: 5600/13940, Loss: 3.5325735819339754. Time = 2020-02-18 12:25:27.583553
Epoch: 3/10, Progress in Epoch: 6000/13940, Loss: 3.57635999083519. Time = 2020-02-18 12:25:45.197884
Epoch: 3/10, Progress in Epoch: 6400/13940, Loss: 3.501342144012451. Time = 2020-02-18 12:26:02.832990
Epoch: 3/10, Progress in Epoch: 6800/13940, Loss: 3.466717317700386. Time = 2020-02-18 12:26:20.455457
Epoch: 3/10, Progress in Epoch: 7200/13940, Loss: 3.350389167070389. Time = 2020-02-18 12:26:38.067786
Epoch: 3/10, Progress in Epoch: 7600/13940, Loss: 3.4917218190431596. Time = 2020-02-18 12:26:55.690710
Epoch: 3/10, Progress in Epoch: 8000/13940, Loss: 3.3131301873922347. Time = 2020-02-18 12:27:13.319596
Epoch: 3/10, Progress in Epoch: 8400/13940, Loss: 3.35322684854269. Time = 2020-02-18 12:27:30.946262
Epoch: 3/10, Progress in Epoch: 8800/13940, Loss: 3.41247697532177. Time = 2020-02-18 12:27:48.566585
Epoch: 3/10, Progress in Epoch: 9200/13940, Loss: 3.3569095546007155. Time = 2020-02-18 12:28:06.199323
Epoch: 3/10, Progress in Epoch: 9600/13940, Loss: 3.3754072308540346. Time = 2020-02-18 12:28:23.849517
Epoch: 3/10, Progress in Epoch: 10000/13940, Loss: 3.333665554225445. Time = 2020-02-18 12:28:41.482029
Epoch: 3/10, Progress in Epoch: 10400/13940, Loss: 3.496324677467346. Time = 2020-02-18 12:28:59.311515
Epoch: 3/10, Progress in Epoch: 10800/13940, Loss: 3.489862497448921. Time = 2020-02-18 12:29:16.922888
Epoch: 3/10, Progress in Epoch: 11200/13940, Loss: 3.61966025531292. Time = 2020-02-18 12:29:34.538748
Epoch: 3/10, Progress in Epoch: 11600/13940, Loss: 3.568893506526947. Time = 2020-02-18 12:29:52.156655
Epoch: 3/10, Progress in Epoch: 12000/13940, Loss: 3.5405802798271178. Time = 2020-02-18 12:30:09.779412
Epoch: 3/10, Progress in Epoch: 12400/13940, Loss: 3.5214705842733385. Time = 2020-02-18 12:30:27.380147
Epoch: 3/10, Progress in Epoch: 12800/13940, Loss: 3.524023380279541. Time = 2020-02-18 12:30:44.983281
Epoch: 3/10, Progress in Epoch: 13200/13940, Loss: 3.5160969358682634. Time = 2020-02-18 12:31:02.621368
Epoch: 3/10, Progress in Epoch: 13600/13940, Loss: 3.558420597910881. Time = 2020-02-18 12:31:20.229090
Epoch: 3/10, Complete. AVRG Loss : 3.4513036534933152.
Epoch: 4/10, Progress in Epoch: 400/13940, Loss: 3.447636641088132. Time = 2020-02-18 12:31:52.761734
Epoch: 4/10, Progress in Epoch: 800/13940, Loss: 3.4203410986065865. Time = 2020-02-18 12:32:10.363003
Epoch: 4/10, Progress in Epoch: 1200/13940, Loss: 3.4442234909534455. Time = 2020-02-18 12:32:27.989534
Epoch: 4/10, Progress in Epoch: 1600/13940, Loss: 3.462333211302757. Time = 2020-02-18 12:32:45.590862
Epoch: 4/10, Progress in Epoch: 2000/13940, Loss: 3.392588405907154. Time = 2020-02-18 12:33:03.365817
Epoch: 4/10, Progress in Epoch: 2400/13940, Loss: 3.4093611109256745. Time = 2020-02-18 12:33:20.991110
Epoch: 4/10, Progress in Epoch: 2800/13940, Loss: 3.3776717972755432. Time = 2020-02-18 12:33:38.613449
Epoch: 4/10, Progress in Epoch: 3200/13940, Loss: 3.262619821727276. Time = 2020-02-18 12:33:56.228927
Epoch: 4/10, Progress in Epoch: 3600/13940, Loss: 3.2774949631094934. Time = 2020-02-18 12:34:13.846927
Epoch: 4/10, Progress in Epoch: 4000/13940, Loss: 3.3836532151699066. Time = 2020-02-18 12:34:31.462323
Epoch: 4/10, Progress in Epoch: 4400/13940, Loss: 3.444744099974632. Time = 2020-02-18 12:34:49.070401
Epoch: 4/10, Progress in Epoch: 4800/13940, Loss: 3.3544535505771638. Time = 2020-02-18 12:35:06.688854
Epoch: 4/10, Progress in Epoch: 5200/13940, Loss: 3.345361199378967. Time = 2020-02-18 12:35:24.289325
Epoch: 4/10, Progress in Epoch: 5600/13940, Loss: 3.504314076602459. Time = 2020-02-18 12:35:41.907953
Epoch: 4/10, Progress in Epoch: 6000/13940, Loss: 3.5284016239643097. Time = 2020-02-18 12:35:59.519496
Epoch: 4/10, Progress in Epoch: 6400/13940, Loss: 3.4650064414739608. Time = 2020-02-18 12:36:17.132119
Epoch: 4/10, Progress in Epoch: 6800/13940, Loss: 3.4362853527069093. Time = 2020-02-18 12:36:34.758740
Epoch: 4/10, Progress in Epoch: 7200/13940, Loss: 3.3104356861114503. Time = 2020-02-18 12:36:52.516510
Epoch: 4/10, Progress in Epoch: 7600/13940, Loss: 3.457484288215637. Time = 2020-02-18 12:37:10.121406
Epoch: 4/10, Progress in Epoch: 8000/13940, Loss: 3.28550971865654. Time = 2020-02-18 12:37:27.729468
Epoch: 4/10, Progress in Epoch: 8400/13940, Loss: 3.3187692767381667. Time = 2020-02-18 12:37:45.359397
Epoch: 4/10, Progress in Epoch: 8800/13940, Loss: 3.3853594183921816. Time = 2020-02-18 12:38:02.986054
Epoch: 4/10, Progress in Epoch: 9200/13940, Loss: 3.328881596326828. Time = 2020-02-18 12:38:20.595988
Epoch: 4/10, Progress in Epoch: 9600/13940, Loss: 3.354059534072876. Time = 2020-02-18 12:38:38.206016
Epoch: 4/10, Progress in Epoch: 10000/13940, Loss: 3.3002479714155197. Time = 2020-02-18 12:38:55.811125
Epoch: 4/10, Progress in Epoch: 10400/13940, Loss: 3.4738124850392342. Time = 2020-02-18 12:39:13.407891
Epoch: 4/10, Progress in Epoch: 10800/13940, Loss: 3.4687465119361875. Time = 2020-02-18 12:39:31.031487
Epoch: 4/10, Progress in Epoch: 11200/13940, Loss: 3.590900978446007. Time = 2020-02-18 12:39:48.647307
Epoch: 4/10, Progress in Epoch: 11600/13940, Loss: 3.5221056115627287. Time = 2020-02-18 12:40:06.263333
Epoch: 4/10, Progress in Epoch: 12000/13940, Loss: 3.485120722055435. Time = 2020-02-18 12:40:23.880699
Epoch: 4/10, Progress in Epoch: 12400/13940, Loss: 3.4905200719833376. Time = 2020-02-18 12:40:41.475225
Epoch: 4/10, Progress in Epoch: 12800/13940, Loss: 3.4802138659358026. Time = 2020-02-18 12:40:59.309151
Epoch: 4/10, Progress in Epoch: 13200/13940, Loss: 3.492168377041817. Time = 2020-02-18 12:41:16.917540
Epoch: 4/10, Progress in Epoch: 13600/13940, Loss: 3.513845224380493. Time = 2020-02-18 12:41:34.543631
Epoch: 4/10, Complete. AVRG Loss : 3.417355781588769.
Epoch: 5/10, Progress in Epoch: 400/13940, Loss: 3.4108232732393424. Time = 2020-02-18 12:42:07.084957
Epoch: 5/10, Progress in Epoch: 800/13940, Loss: 3.3872651305794714. Time = 2020-02-18 12:42:24.701399
Epoch: 5/10, Progress in Epoch: 1200/13940, Loss: 3.4272228586673736. Time = 2020-02-18 12:42:42.308570
Epoch: 5/10, Progress in Epoch: 1600/13940, Loss: 3.427277734279633. Time = 2020-02-18 12:42:59.929482
Epoch: 5/10, Progress in Epoch: 2000/13940, Loss: 3.361164151132107. Time = 2020-02-18 12:43:17.531985
Epoch: 5/10, Progress in Epoch: 2400/13940, Loss: 3.3849739098548888. Time = 2020-02-18 12:43:35.161836
Epoch: 5/10, Progress in Epoch: 2800/13940, Loss: 3.344558856487274. Time = 2020-02-18 12:43:52.768950
Epoch: 5/10, Progress in Epoch: 3200/13940, Loss: 3.229952166378498. Time = 2020-02-18 12:44:10.383083
Epoch: 5/10, Progress in Epoch: 3600/13940, Loss: 3.2592759209871294. Time = 2020-02-18 12:44:28.019061
Epoch: 5/10, Progress in Epoch: 4000/13940, Loss: 3.3515184247493743. Time = 2020-02-18 12:44:45.621950
Epoch: 5/10, Progress in Epoch: 4400/13940, Loss: 3.414866480231285. Time = 2020-02-18 12:45:03.441324
Epoch: 5/10, Progress in Epoch: 4800/13940, Loss: 3.322731137871742. Time = 2020-02-18 12:45:21.060549
Epoch: 5/10, Progress in Epoch: 5200/13940, Loss: 3.3129196214675902. Time = 2020-02-18 12:45:38.679244
Epoch: 5/10, Progress in Epoch: 5600/13940, Loss: 3.479432844519615. Time = 2020-02-18 12:45:56.298598
Epoch: 5/10, Progress in Epoch: 6000/13940, Loss: 3.501705523133278. Time = 2020-02-18 12:46:13.921029
Epoch: 5/10, Progress in Epoch: 6400/13940, Loss: 3.435933470129967. Time = 2020-02-18 12:46:31.518693
Epoch: 5/10, Progress in Epoch: 6800/13940, Loss: 3.4150149637460707. Time = 2020-02-18 12:46:49.154495
Epoch: 5/10, Progress in Epoch: 7200/13940, Loss: 3.2944598776102065. Time = 2020-02-18 12:47:06.770693
Epoch: 5/10, Progress in Epoch: 7600/13940, Loss: 3.4326196336746215. Time = 2020-02-18 12:47:24.369967
Epoch: 5/10, Progress in Epoch: 8000/13940, Loss: 3.274423359632492. Time = 2020-02-18 12:47:41.987533
Epoch: 5/10, Progress in Epoch: 8400/13940, Loss: 3.297988620698452. Time = 2020-02-18 12:47:59.574118
Epoch: 5/10, Progress in Epoch: 8800/13940, Loss: 3.377900887131691. Time = 2020-02-18 12:48:17.188768
Epoch: 5/10, Progress in Epoch: 9200/13940, Loss: 3.3018263867497444. Time = 2020-02-18 12:48:34.799148
Epoch: 5/10, Progress in Epoch: 9600/13940, Loss: 3.3156812340021133. Time = 2020-02-18 12:48:52.603325
Epoch: 5/10, Progress in Epoch: 10000/13940, Loss: 3.275946944952011. Time = 2020-02-18 12:49:10.216602
Epoch: 5/10, Progress in Epoch: 10400/13940, Loss: 3.440433329343796. Time = 2020-02-18 12:49:27.829874
Epoch: 5/10, Progress in Epoch: 10800/13940, Loss: 3.444994344115257. Time = 2020-02-18 12:49:45.435158
Epoch: 5/10, Progress in Epoch: 11200/13940, Loss: 3.55953544318676. Time = 2020-02-18 12:50:03.049835
Epoch: 5/10, Progress in Epoch: 11600/13940, Loss: 3.495890570282936. Time = 2020-02-18 12:50:20.665894
Epoch: 5/10, Progress in Epoch: 12000/13940, Loss: 3.465507603883743. Time = 2020-02-18 12:50:38.261543
Epoch: 5/10, Progress in Epoch: 12400/13940, Loss: 3.4655830842256545. Time = 2020-02-18 12:50:55.881063
Epoch: 5/10, Progress in Epoch: 12800/13940, Loss: 3.4481960052251814. Time = 2020-02-18 12:51:13.493568
Epoch: 5/10, Progress in Epoch: 13200/13940, Loss: 3.4673463493585586. Time = 2020-02-18 12:51:31.126223
Epoch: 5/10, Progress in Epoch: 13600/13940, Loss: 3.4747747099399566. Time = 2020-02-18 12:51:48.729825
Epoch: 5/10, Complete. AVRG Loss : 3.3908053439336885.
Epoch: 6/10, Progress in Epoch: 400/13940, Loss: 3.3828285956737636. Time = 2020-02-18 12:52:21.296581
Epoch: 6/10, Progress in Epoch: 800/13940, Loss: 3.3763101053237916. Time = 2020-02-18 12:52:38.920644
Epoch: 6/10, Progress in Epoch: 1200/13940, Loss: 3.4091614985466006. Time = 2020-02-18 12:52:56.750758
Epoch: 6/10, Progress in Epoch: 1600/13940, Loss: 3.3972772747278213. Time = 2020-02-18 12:53:14.379473
Epoch: 6/10, Progress in Epoch: 2000/13940, Loss: 3.335883587896824. Time = 2020-02-18 12:53:32.024724
Epoch: 6/10, Progress in Epoch: 2400/13940, Loss: 3.3623815125226972. Time = 2020-02-18 12:53:49.645033
Epoch: 6/10, Progress in Epoch: 2800/13940, Loss: 3.335407648086548. Time = 2020-02-18 12:54:07.276401
Epoch: 6/10, Progress in Epoch: 3200/13940, Loss: 3.2287586975097655. Time = 2020-02-18 12:54:24.888284
Epoch: 6/10, Progress in Epoch: 3600/13940, Loss: 3.243279631435871. Time = 2020-02-18 12:54:42.513079
Epoch: 6/10, Progress in Epoch: 4000/13940, Loss: 3.342167426943779. Time = 2020-02-18 12:55:00.135712
Epoch: 6/10, Progress in Epoch: 4400/13940, Loss: 3.40387444794178. Time = 2020-02-18 12:55:17.727636
Epoch: 6/10, Progress in Epoch: 4800/13940, Loss: 3.3219415980577467. Time = 2020-02-18 12:55:35.351464
Epoch: 6/10, Progress in Epoch: 5200/13940, Loss: 3.2969096744060518. Time = 2020-02-18 12:55:52.948383
Epoch: 6/10, Progress in Epoch: 5600/13940, Loss: 3.452347872853279. Time = 2020-02-18 12:56:10.599882
Epoch: 6/10, Progress in Epoch: 6000/13940, Loss: 3.4840889954566956. Time = 2020-02-18 12:56:28.213041
Epoch: 6/10, Progress in Epoch: 6400/13940, Loss: 3.4170104521512985. Time = 2020-02-18 12:56:45.815773
Epoch: 6/10, Progress in Epoch: 6800/13940, Loss: 3.398458643555641. Time = 2020-02-18 12:57:03.624248
Epoch: 6/10, Progress in Epoch: 7200/13940, Loss: 3.2693808060884475. Time = 2020-02-18 12:57:21.234502
Epoch: 6/10, Progress in Epoch: 7600/13940, Loss: 3.4141890144348146. Time = 2020-02-18 12:57:38.874471
Epoch: 6/10, Progress in Epoch: 8000/13940, Loss: 3.2552785363793375. Time = 2020-02-18 12:57:56.477082
Epoch: 6/10, Progress in Epoch: 8400/13940, Loss: 3.2764982852339744. Time = 2020-02-18 12:58:14.097678
Epoch: 6/10, Progress in Epoch: 8800/13940, Loss: 3.34302699893713. Time = 2020-02-18 12:58:31.742772
Epoch: 6/10, Progress in Epoch: 9200/13940, Loss: 3.2825796875357627. Time = 2020-02-18 12:58:49.366781
Epoch: 6/10, Progress in Epoch: 9600/13940, Loss: 3.2935839653015138. Time = 2020-02-18 12:59:06.992827
Epoch: 6/10, Progress in Epoch: 10000/13940, Loss: 3.257418552339077. Time = 2020-02-18 12:59:24.635777
Epoch: 6/10, Progress in Epoch: 10400/13940, Loss: 3.4252628538012506. Time = 2020-02-18 12:59:42.274283
Epoch: 6/10, Progress in Epoch: 10800/13940, Loss: 3.4205311810970307. Time = 2020-02-18 12:59:59.922306
Epoch: 6/10, Progress in Epoch: 11200/13940, Loss: 3.531415318250656. Time = 2020-02-18 13:00:17.580545
Epoch: 6/10, Progress in Epoch: 11600/13940, Loss: 3.4643097096681594. Time = 2020-02-18 13:00:35.244050
Epoch: 6/10, Progress in Epoch: 12000/13940, Loss: 3.4401284140348434. Time = 2020-02-18 13:00:53.084721
Epoch: 6/10, Progress in Epoch: 12400/13940, Loss: 3.4575736439228058. Time = 2020-02-18 13:01:10.688277
Epoch: 6/10, Progress in Epoch: 12800/13940, Loss: 3.4228893661499025. Time = 2020-02-18 13:01:28.325460
Epoch: 6/10, Progress in Epoch: 13200/13940, Loss: 3.44178814470768. Time = 2020-02-18 13:01:45.953812
Epoch: 6/10, Progress in Epoch: 13600/13940, Loss: 3.431964892745018. Time = 2020-02-18 13:02:03.561575
Epoch: 6/10, Complete. AVRG Loss : 3.3707824770300516.
Epoch: 7/10, Progress in Epoch: 400/13940, Loss: 3.3626325312744783. Time = 2020-02-18 13:02:36.134631
Epoch: 7/10, Progress in Epoch: 800/13940, Loss: 3.358048400878906. Time = 2020-02-18 13:02:53.771417
Epoch: 7/10, Progress in Epoch: 1200/13940, Loss: 3.3932857447862625. Time = 2020-02-18 13:03:11.384532
Epoch: 7/10, Progress in Epoch: 1600/13940, Loss: 3.3885148537158964. Time = 2020-02-18 13:03:29.021403
Epoch: 7/10, Progress in Epoch: 2000/13940, Loss: 3.314033052921295. Time = 2020-02-18 13:03:46.651835
Epoch: 7/10, Progress in Epoch: 2400/13940, Loss: 3.350456181764603. Time = 2020-02-18 13:04:04.297637
Epoch: 7/10, Progress in Epoch: 2800/13940, Loss: 3.305575399696827. Time = 2020-02-18 13:04:21.949355
Epoch: 7/10, Progress in Epoch: 3200/13940, Loss: 3.2123086738586424. Time = 2020-02-18 13:04:39.581789
Epoch: 7/10, Progress in Epoch: 3600/13940, Loss: 3.2066237625479697. Time = 2020-02-18 13:04:57.367528
Epoch: 7/10, Progress in Epoch: 4000/13940, Loss: 3.333804697394371. Time = 2020-02-18 13:05:15.003218
Epoch: 7/10, Progress in Epoch: 4400/13940, Loss: 3.363949829339981. Time = 2020-02-18 13:05:32.638921
Epoch: 7/10, Progress in Epoch: 4800/13940, Loss: 3.2942299526929855. Time = 2020-02-18 13:05:50.263616
Epoch: 7/10, Progress in Epoch: 5200/13940, Loss: 3.2856011813879014. Time = 2020-02-18 13:06:07.894425
Epoch: 7/10, Progress in Epoch: 5600/13940, Loss: 3.43520455121994. Time = 2020-02-18 13:06:25.493539
Epoch: 7/10, Progress in Epoch: 6000/13940, Loss: 3.457996132969856. Time = 2020-02-18 13:06:43.100016
Epoch: 7/10, Progress in Epoch: 6400/13940, Loss: 3.3964185202121735. Time = 2020-02-18 13:07:00.709251
Epoch: 7/10, Progress in Epoch: 6800/13940, Loss: 3.376386004090309. Time = 2020-02-18 13:07:18.330281
Epoch: 7/10, Progress in Epoch: 7200/13940, Loss: 3.2651589387655258. Time = 2020-02-18 13:07:35.953452
Epoch: 7/10, Progress in Epoch: 7600/13940, Loss: 3.3859938365221023. Time = 2020-02-18 13:07:53.570393
Epoch: 7/10, Progress in Epoch: 8000/13940, Loss: 3.241196416914463. Time = 2020-02-18 13:08:11.197552
Epoch: 7/10, Progress in Epoch: 8400/13940, Loss: 3.2484081745147706. Time = 2020-02-18 13:08:28.809354
Epoch: 7/10, Progress in Epoch: 8800/13940, Loss: 3.316608633995056. Time = 2020-02-18 13:08:46.426644
Epoch: 7/10, Progress in Epoch: 9200/13940, Loss: 3.2708351898193357. Time = 2020-02-18 13:09:04.362867
Epoch: 7/10, Progress in Epoch: 9600/13940, Loss: 3.2808942264318466. Time = 2020-02-18 13:09:21.981876
Epoch: 7/10, Progress in Epoch: 10000/13940, Loss: 3.234231291115284. Time = 2020-02-18 13:09:39.602579
Epoch: 7/10, Progress in Epoch: 10400/13940, Loss: 3.3981568828225135. Time = 2020-02-18 13:09:57.224116
Epoch: 7/10, Progress in Epoch: 10800/13940, Loss: 3.4003751373291013. Time = 2020-02-18 13:10:14.853034
Epoch: 7/10, Progress in Epoch: 11200/13940, Loss: 3.515982626080513. Time = 2020-02-18 13:10:32.476774
Epoch: 7/10, Progress in Epoch: 11600/13940, Loss: 3.453178927898407. Time = 2020-02-18 13:10:50.110717
Epoch: 7/10, Progress in Epoch: 12000/13940, Loss: 3.4081598049402237. Time = 2020-02-18 13:11:07.726935
Epoch: 7/10, Progress in Epoch: 12400/13940, Loss: 3.4288965541124345. Time = 2020-02-18 13:11:25.355465
Epoch: 7/10, Progress in Epoch: 12800/13940, Loss: 3.4012474930286407. Time = 2020-02-18 13:11:42.966442
Epoch: 7/10, Progress in Epoch: 13200/13940, Loss: 3.417665944099426. Time = 2020-02-18 13:12:00.592392
Epoch: 7/10, Progress in Epoch: 13600/13940, Loss: 3.4246517407894133. Time = 2020-02-18 13:12:18.213411
Epoch: 7/10, Complete. AVRG Loss : 3.3504397688743945.
Epoch: 8/10, Progress in Epoch: 400/13940, Loss: 3.344726390864433. Time = 2020-02-18 13:12:50.767028
Epoch: 8/10, Progress in Epoch: 800/13940, Loss: 3.3338813084363936. Time = 2020-02-18 13:13:08.864565
Epoch: 8/10, Progress in Epoch: 1200/13940, Loss: 3.3693611550331117. Time = 2020-02-18 13:13:26.488860
Epoch: 8/10, Progress in Epoch: 1600/13940, Loss: 3.3748216849565504. Time = 2020-02-18 13:13:44.118674
Epoch: 8/10, Progress in Epoch: 2000/13940, Loss: 3.2970508444309234. Time = 2020-02-18 13:14:01.748374
Epoch: 8/10, Progress in Epoch: 2400/13940, Loss: 3.3394521194696427. Time = 2020-02-18 13:14:19.364802
Epoch: 8/10, Progress in Epoch: 2800/13940, Loss: 3.3004163920879366. Time = 2020-02-18 13:14:36.982610
Epoch: 8/10, Progress in Epoch: 3200/13940, Loss: 3.1849263501167298. Time = 2020-02-18 13:14:54.626321
Epoch: 8/10, Progress in Epoch: 3600/13940, Loss: 3.209246135354042. Time = 2020-02-18 13:15:12.275440
Epoch: 8/10, Progress in Epoch: 4000/13940, Loss: 3.314252491593361. Time = 2020-02-18 13:15:29.892410
Epoch: 8/10, Progress in Epoch: 4400/13940, Loss: 3.3598943465948103. Time = 2020-02-18 13:15:47.500806
Epoch: 8/10, Progress in Epoch: 4800/13940, Loss: 3.273265761733055. Time = 2020-02-18 13:16:05.116953
Epoch: 8/10, Progress in Epoch: 5200/13940, Loss: 3.2735425460338594. Time = 2020-02-18 13:16:22.732951
Epoch: 8/10, Progress in Epoch: 5600/13940, Loss: 3.400097432434559. Time = 2020-02-18 13:16:40.343120
Epoch: 8/10, Progress in Epoch: 6000/13940, Loss: 3.4398484522104265. Time = 2020-02-18 13:16:58.187094
Epoch: 8/10, Progress in Epoch: 6400/13940, Loss: 3.386214165687561. Time = 2020-02-18 13:17:15.799178
Epoch: 8/10, Progress in Epoch: 6800/13940, Loss: 3.3728552544116974. Time = 2020-02-18 13:17:33.419174
Epoch: 8/10, Progress in Epoch: 7200/13940, Loss: 3.245937911272049. Time = 2020-02-18 13:17:51.014402
Epoch: 8/10, Progress in Epoch: 7600/13940, Loss: 3.3687852799892424. Time = 2020-02-18 13:18:08.627643
Epoch: 8/10, Progress in Epoch: 8000/13940, Loss: 3.2030282524228095. Time = 2020-02-18 13:18:26.261875
Epoch: 8/10, Progress in Epoch: 8400/13940, Loss: 3.2398369708657264. Time = 2020-02-18 13:18:43.864796
Epoch: 8/10, Progress in Epoch: 8800/13940, Loss: 3.304183844923973. Time = 2020-02-18 13:19:01.474309
Epoch: 8/10, Progress in Epoch: 9200/13940, Loss: 3.254605756402016. Time = 2020-02-18 13:19:19.075208
Epoch: 8/10, Progress in Epoch: 9600/13940, Loss: 3.257960031032562. Time = 2020-02-18 13:19:36.682979
Epoch: 8/10, Progress in Epoch: 10000/13940, Loss: 3.2214042010903357. Time = 2020-02-18 13:19:54.310548
Epoch: 8/10, Progress in Epoch: 10400/13940, Loss: 3.3710079535841944. Time = 2020-02-18 13:20:11.917438
Epoch: 8/10, Progress in Epoch: 10800/13940, Loss: 3.3888149678707125. Time = 2020-02-18 13:20:29.537910
Epoch: 8/10, Progress in Epoch: 11200/13940, Loss: 3.4792627054452896. Time = 2020-02-18 13:20:47.176811
Epoch: 8/10, Progress in Epoch: 11600/13940, Loss: 3.420331249833107. Time = 2020-02-18 13:21:04.966884
Epoch: 8/10, Progress in Epoch: 12000/13940, Loss: 3.4004440504312514. Time = 2020-02-18 13:21:22.572464
Epoch: 8/10, Progress in Epoch: 12400/13940, Loss: 3.397537671327591. Time = 2020-02-18 13:21:40.185981
Epoch: 8/10, Progress in Epoch: 12800/13940, Loss: 3.3923223036527634. Time = 2020-02-18 13:21:57.798712
Epoch: 8/10, Progress in Epoch: 13200/13940, Loss: 3.397378239035606. Time = 2020-02-18 13:22:15.422006
Epoch: 8/10, Progress in Epoch: 13600/13940, Loss: 3.3953550881147385. Time = 2020-02-18 13:22:33.054845
Epoch: 8/10, Complete. AVRG Loss : 3.332282440384779.
Epoch: 9/10, Progress in Epoch: 400/13940, Loss: 3.318012382245354. Time = 2020-02-18 13:23:05.623474
Epoch: 9/10, Progress in Epoch: 800/13940, Loss: 3.3225846028327943. Time = 2020-02-18 13:23:23.247614
Epoch: 9/10, Progress in Epoch: 1200/13940, Loss: 3.3539452731609343. Time = 2020-02-18 13:23:40.893826
Epoch: 9/10, Progress in Epoch: 1600/13940, Loss: 3.347273669242859. Time = 2020-02-18 13:23:58.521997
Epoch: 9/10, Progress in Epoch: 2000/13940, Loss: 3.2863294103741647. Time = 2020-02-18 13:24:16.129619
Epoch: 9/10, Progress in Epoch: 2400/13940, Loss: 3.3139397406578066. Time = 2020-02-18 13:24:33.763862
Epoch: 9/10, Progress in Epoch: 2800/13940, Loss: 3.284566843211651. Time = 2020-02-18 13:24:51.391791
Epoch: 9/10, Progress in Epoch: 3200/13940, Loss: 3.1677611231803895. Time = 2020-02-18 13:25:09.211742
Epoch: 9/10, Progress in Epoch: 3600/13940, Loss: 3.189292680621147. Time = 2020-02-18 13:25:26.837927
Epoch: 9/10, Progress in Epoch: 4000/13940, Loss: 3.3176461908221246. Time = 2020-02-18 13:25:44.466706
Epoch: 9/10, Progress in Epoch: 4400/13940, Loss: 3.3321087276935577. Time = 2020-02-18 13:26:02.094395
Epoch: 9/10, Progress in Epoch: 4800/13940, Loss: 3.2616928571462633. Time = 2020-02-18 13:26:19.711131
Epoch: 9/10, Progress in Epoch: 5200/13940, Loss: 3.2561175739765167. Time = 2020-02-18 13:26:37.328976
Epoch: 9/10, Progress in Epoch: 5600/13940, Loss: 3.374616189301014. Time = 2020-02-18 13:26:54.933078
Epoch: 9/10, Progress in Epoch: 6000/13940, Loss: 3.4272401881217958. Time = 2020-02-18 13:27:12.561818
Epoch: 9/10, Progress in Epoch: 6400/13940, Loss: 3.3726720744371415. Time = 2020-02-18 13:27:30.174654
Epoch: 9/10, Progress in Epoch: 6800/13940, Loss: 3.3509815526008606. Time = 2020-02-18 13:27:47.785067
Epoch: 9/10, Progress in Epoch: 7200/13940, Loss: 3.232228834629059. Time = 2020-02-18 13:28:05.399055
Epoch: 9/10, Progress in Epoch: 7600/13940, Loss: 3.3554340386390686. Time = 2020-02-18 13:28:23.006537
Epoch: 9/10, Progress in Epoch: 8000/13940, Loss: 3.20594984382391. Time = 2020-02-18 13:28:40.619984
Epoch: 9/10, Progress in Epoch: 8400/13940, Loss: 3.2213180661201477. Time = 2020-02-18 13:28:58.443809
Epoch: 9/10, Progress in Epoch: 8800/13940, Loss: 3.282545200586319. Time = 2020-02-18 13:29:16.080800
Epoch: 9/10, Progress in Epoch: 9200/13940, Loss: 3.2314607438445093. Time = 2020-02-18 13:29:33.714896
Epoch: 9/10, Progress in Epoch: 9600/13940, Loss: 3.242471357584. Time = 2020-02-18 13:29:51.337772
Epoch: 9/10, Progress in Epoch: 10000/13940, Loss: 3.2015815353393555. Time = 2020-02-18 13:30:08.964297
Epoch: 9/10, Progress in Epoch: 10400/13940, Loss: 3.3652059069275855. Time = 2020-02-18 13:30:26.584001
Epoch: 9/10, Progress in Epoch: 10800/13940, Loss: 3.3702369326353074. Time = 2020-02-18 13:30:44.225479
Epoch: 9/10, Progress in Epoch: 11200/13940, Loss: 3.4617551904916763. Time = 2020-02-18 13:31:01.863130
Epoch: 9/10, Progress in Epoch: 11600/13940, Loss: 3.4118455094099045. Time = 2020-02-18 13:31:19.480164
Epoch: 9/10, Progress in Epoch: 12000/13940, Loss: 3.3820173555612563. Time = 2020-02-18 13:31:37.074764
Epoch: 9/10, Progress in Epoch: 12400/13940, Loss: 3.401015085577965. Time = 2020-02-18 13:31:54.694958
Epoch: 9/10, Progress in Epoch: 12800/13940, Loss: 3.376161198616028. Time = 2020-02-18 13:32:12.309701
Epoch: 9/10, Progress in Epoch: 13200/13940, Loss: 3.3795107650756835. Time = 2020-02-18 13:32:29.931118
Epoch: 9/10, Progress in Epoch: 13600/13940, Loss: 3.3800493067502977. Time = 2020-02-18 13:32:47.528931
Epoch: 9/10, Complete. AVRG Loss : 3.3165540001606.
Epoch: 10/10, Progress in Epoch: 400/13940, Loss: 3.3039505770467774. Time = 2020-02-18 13:33:20.403269
Epoch: 10/10, Progress in Epoch: 800/13940, Loss: 3.307705990076065. Time = 2020-02-18 13:33:37.989985
Epoch: 10/10, Progress in Epoch: 1200/13940, Loss: 3.3366662749648093. Time = 2020-02-18 13:33:55.601742
Epoch: 10/10, Progress in Epoch: 1600/13940, Loss: 3.343503560423851. Time = 2020-02-18 13:34:13.210332
Epoch: 10/10, Progress in Epoch: 2000/13940, Loss: 3.265174173116684. Time = 2020-02-18 13:34:30.832761
Epoch: 10/10, Progress in Epoch: 2400/13940, Loss: 3.3095200884342195. Time = 2020-02-18 13:34:48.442800
Epoch: 10/10, Progress in Epoch: 2800/13940, Loss: 3.2743437337875365. Time = 2020-02-18 13:35:06.054073
Epoch: 10/10, Progress in Epoch: 3200/13940, Loss: 3.15618098795414. Time = 2020-02-18 13:35:23.660255
Epoch: 10/10, Progress in Epoch: 3600/13940, Loss: 3.1697662022709845. Time = 2020-02-18 13:35:41.279366
Epoch: 10/10, Progress in Epoch: 4000/13940, Loss: 3.295374717116356. Time = 2020-02-18 13:35:58.894276
Epoch: 10/10, Progress in Epoch: 4400/13940, Loss: 3.315100667476654. Time = 2020-02-18 13:36:16.521167
Epoch: 10/10, Progress in Epoch: 4800/13940, Loss: 3.241660770773888. Time = 2020-02-18 13:36:34.159783
Epoch: 10/10, Progress in Epoch: 5200/13940, Loss: 3.2415979713201524. Time = 2020-02-18 13:36:51.763240
Epoch: 10/10, Progress in Epoch: 5600/13940, Loss: 3.365173881649971. Time = 2020-02-18 13:37:09.572336
Epoch: 10/10, Progress in Epoch: 6000/13940, Loss: 3.40133036673069. Time = 2020-02-18 13:37:27.217169
Epoch: 10/10, Progress in Epoch: 6400/13940, Loss: 3.3609454107284544. Time = 2020-02-18 13:37:44.834047
Epoch: 10/10, Progress in Epoch: 6800/13940, Loss: 3.3211549481749536. Time = 2020-02-18 13:38:02.470504
Epoch: 10/10, Progress in Epoch: 7200/13940, Loss: 3.2221082133054733. Time = 2020-02-18 13:38:20.087559
Epoch: 10/10, Progress in Epoch: 7600/13940, Loss: 3.344998365044594. Time = 2020-02-18 13:38:37.728510
Epoch: 10/10, Progress in Epoch: 8000/13940, Loss: 3.1881481409072876. Time = 2020-02-18 13:38:55.358886
Epoch: 10/10, Progress in Epoch: 8400/13940, Loss: 3.2015286061167716. Time = 2020-02-18 13:39:12.978595
Epoch: 10/10, Progress in Epoch: 8800/13940, Loss: 3.268630896806717. Time = 2020-02-18 13:39:30.612609
Epoch: 10/10, Progress in Epoch: 9200/13940, Loss: 3.2269287917017935. Time = 2020-02-18 13:39:48.226728
Epoch: 10/10, Progress in Epoch: 9600/13940, Loss: 3.2328739160299302. Time = 2020-02-18 13:40:05.837411
Epoch: 10/10, Progress in Epoch: 10000/13940, Loss: 3.1850824117660523. Time = 2020-02-18 13:40:23.438273
Epoch: 10/10, Progress in Epoch: 10400/13940, Loss: 3.3594769191741944. Time = 2020-02-18 13:40:41.053714
Epoch: 10/10, Progress in Epoch: 10800/13940, Loss: 3.354418889284134. Time = 2020-02-18 13:40:58.857603
Epoch: 10/10, Progress in Epoch: 11200/13940, Loss: 3.4641233384609222. Time = 2020-02-18 13:41:16.508514
Epoch: 10/10, Progress in Epoch: 11600/13940, Loss: 3.3771490609645842. Time = 2020-02-18 13:41:34.122624
Epoch: 10/10, Progress in Epoch: 12000/13940, Loss: 3.3536438608169554. Time = 2020-02-18 13:41:51.763390
Epoch: 10/10, Progress in Epoch: 12400/13940, Loss: 3.38925786703825. Time = 2020-02-18 13:42:09.394109
Epoch: 10/10, Progress in Epoch: 12800/13940, Loss: 3.3545562505722044. Time = 2020-02-18 13:42:27.035500
Epoch: 10/10, Progress in Epoch: 13200/13940, Loss: 3.3701386821269987. Time = 2020-02-18 13:42:44.629973
Epoch: 10/10, Progress in Epoch: 13600/13940, Loss: 3.3596088737249374. Time = 2020-02-18 13:43:02.230739
Epoch: 10/10, Complete. AVRG Loss : 3.3016379591860896.
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:**

First training run, initial hyperparameters:

```python
sequence_length = 8
batch_size = 32
learning_rate = 0.01
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 200
hidden_dim = 256
n_layers = 3
show_every_n_batches = 100
```

Result: the loss kept oscillating around 9.3 and did not converge.

Second training run, changed parameter:

```python
learning_rate = 0.001
```

Result: the loss fell from about 9.38 to about 9.16 and then stopped improving.

Analysis: lowering learning_rate helps reduce the loss, but not by much. This suggests:
* learning_rate could be lowered further
* the lack of convergence may have other causes, for example the model may have too few parameters to describe the problem

Third training run: changed the `create_lookup_tables` function to sort the words by frequency before building the lookup dictionary. Changed parameters:

```python
sequence_length = 5
batch_size = 64
learning_rate = 0.0001
embedding_dim = 300
hidden_dim = 512
n_layers = 2
```

Result:
- Epoch loss: `[9.48, 9.20, 9.143, 9.128, 9.127, 9.115]`

Analysis:
- Convergence is very slow; learning_rate probably needs to be raised.

Fourth training run, changed parameter:

```python
learning_rate = 0.0005
```

Result: the epoch loss increased.

Conclusion:
- learning_rate still needs to be lowered.

Nth training run: tried adjusting all kinds of parameters, but the loss stayed above 9.1 and would not come down.

Analysis:
- The model itself probably had a problem, so I went looking for answers online.
- On the Udacity forum I saw that someone had added a sigmoid layer when defining the Module, which kept the loss from decreasing. I found I had made the same mistake.

(N+1)th training run, changes:
- Removed the sigmoid layer from the Module definition.
- Waiting for each epoch took too long, so to iterate faster I first trained on only `int_text[:40000]` by changing the code to `train_loader = batch_data(int_text[:40000], sequence_length, batch_size)`, planning to switch back once the parameters were tuned.

```python
learning_rate = 0.001
embedding_dim = 200
hidden_dim = 512
```

Result:
- After 20 epochs, the loss dropped to 1.37.

Analysis:
- Good progress, the trend is right. Switch back to the full dataset and start training!

(N+2)th training run: the full-dataset run (training output shown above).

--- Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
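A minimal illustration of the sigmoid issue mentioned in the answer (toy numbers, not the project model): `nn.CrossEntropyLoss` expects raw logits, and squashing them through a sigmoid first keeps the softmax close to uniform, so the loss gets stuck at a high value no matter how the other hyperparameters are tuned.

```python
import torch
import torch.nn as nn

vocab = 10000
criterion = nn.CrossEntropyLoss()
target = torch.tensor([3])

# A confident prediction: one large raw score on the correct word id.
logits = torch.zeros(1, vocab)
logits[0, 3] = 20.0

print(criterion(logits, target))                 # ~0.0: raw logits can express a confident prediction
print(criterion(torch.sigmoid(logits), target))  # ~8.7: stuck high, because sigmoid caps every score at 1
```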
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
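For reference, here is a standalone toy version of the top-k sampling step that `generate` performs above (made-up scores for a 10-word vocabulary, not real model output):

```python
import numpy as np
import torch
import torch.nn.functional as F

# Fake word scores for one prediction over a 10-word vocabulary.
scores = torch.tensor([[0.1, 2.5, 0.3, 1.8, 0.2, 3.0, 0.4, 0.9, 1.1, 0.5]])

p = F.softmax(scores, dim=1).data
top_p, top_i = p.topk(5)              # keep the 5 most likely word ids
top_p = top_p.numpy().squeeze()
top_i = top_i.numpy().squeeze()

# Renormalize over the top 5 and sample: likely words are favored, but not guaranteed.
next_word_id = np.random.choice(top_i, p=top_p / top_p.sum())
print(next_word_id)
```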
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:43: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (30, 50)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 30 to 50:
george: wait a second, wait a second, what coming in, what woman is coming in?
jerry: i told you about laura, the girl i met in michigan?
george: no, you didnt!
jerry: i thought i told you about it, yes, she teaches political science? i met her the night i did the show in lansing...
george: ha.
jerry: (looks in the creamer) theres no milk in here, what...
george: wait wait wait, what is she... (takes the milk can from jerry and puts it on the table) what is she like?
jerry: oh, shes really great. i mean, shes got like a real warmth about her and shes really bright and really pretty and uh... the conversation though, i mean, it was... talking with her is like talking with you, but, you know, obviously much better.
george: (smiling) so, you know, what, what happened?
jerry: oh, nothing happened, you know, but is was great.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# get rid of punctuation and standardize (from RNN sentiment and word2vec embeddings exercises)
#text = text.lower() # lowercase all capitals
#all_text = ''.join([c for c in text if c not in punctuation])
# split by new lines and spaces
#text_split = all_text.split('\n')
#all_text = ' '.join(text_split)
# create list of words
#words = all_text.split()
# create the dictionaires
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
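A quick sanity check on a toy word list (not the script data): the most frequent word should map to the smallest id.

```python
# 'the' appears most often, so it should get id 0
v2i, i2v = create_lookup_tables(['the', 'cat', 'sat', 'on', 'the', 'mat', 'the'])
print(v2i['the'], i2v[0])  # 0 the
```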
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
toke = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semi_Colon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||',
}
return toke
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
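As an illustration only (the actual preprocessing is handled by `helper.preprocess_and_save_data`, which is given `token_lookup` below), the dictionary can be applied to a line of dialogue like this:

```python
# Surround each punctuation token with spaces so it becomes its own "word"
sample = 'no, you didnt!'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())  # ['no', '||Comma||', 'you', 'didnt', '||Exclamation_Mark||']
```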
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import torch
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors = []
target_tensors = []
for idx in range(len(words)-1-sequence_length):
feature_tensors.append(words[idx:idx+sequence_length]) # get next sequence_length number of words following index
target_tensors.append(words[idx+sequence_length]) # get immediately following word integer for target
feature_tensors = torch.tensor(feature_tensors)
target_tensors = torch.tensor(target_tensors)
# return a dataloader
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size,
shuffle=True)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[28, 29, 30, 31, 32],
[17, 18, 19, 20, 21],
[19, 20, 21, 22, 23],
[39, 40, 41, 42, 43],
[15, 16, 17, 18, 19],
[ 7, 8, 9, 10, 11],
[ 6, 7, 8, 9, 10],
[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[36, 37, 38, 39, 40]])
torch.Size([10])
tensor([33, 22, 24, 44, 20, 12, 11, 5, 10, 41])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.embedding_dim = embedding_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(0.3)
# initialize embedding tables with uniform distribution
self.embedding.weight.data.uniform_(-1, 1)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0) # get first value of tensor
x = nn_input.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# get last batch of labels (i.e. the top predictions)
lstm_out = lstm_out[:, -1]
# stack LSTM
out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(out)
out = self.fc(out)
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip = 5 # gradient clipping
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# create new variable for hidden state so we don't backpropogate over entire history
hidden = tuple([each.data for each in hidden])
# zero accumualated gradients
rnn.zero_grad()
# get output from model
output, hidden = rnn(inp, hidden)
# calculate loss and perform backpropogation
loss = criterion(output.squeeze(), target.long())
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 200 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 500
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.334586154937744
Epoch: 1/10 Loss: 4.797140218257904
Epoch: 1/10 Loss: 4.595639789104462
Epoch: 1/10 Loss: 4.44755672454834
Epoch: 1/10 Loss: 4.42781433057785
Epoch: 1/10 Loss: 4.354476883888244
Epoch: 1/10 Loss: 4.295695515632629
Epoch: 1/10 Loss: 4.262855576515197
Epoch: 1/10 Loss: 4.248539539337158
Epoch: 1/10 Loss: 4.236551188468933
Epoch: 1/10 Loss: 4.213077045440674
Epoch: 1/10 Loss: 4.202364300251007
Epoch: 1/10 Loss: 4.179191162109375
Epoch: 2/10 Loss: 4.076817929498421
Epoch: 2/10 Loss: 4.017686247348785
Epoch: 2/10 Loss: 3.9865696363449095
Epoch: 2/10 Loss: 3.997194850921631
Epoch: 2/10 Loss: 3.984932848930359
Epoch: 2/10 Loss: 3.9910859718322755
Epoch: 2/10 Loss: 4.003655794143676
Epoch: 2/10 Loss: 3.960712371826172
Epoch: 2/10 Loss: 3.9844819622039793
Epoch: 2/10 Loss: 3.9616341958045957
Epoch: 2/10 Loss: 3.9752440967559814
Epoch: 2/10 Loss: 3.963847270488739
Epoch: 2/10 Loss: 3.9581854982376097
Epoch: 3/10 Loss: 3.870664988913812
Epoch: 3/10 Loss: 3.8055054931640626
Epoch: 3/10 Loss: 3.8039415345191956
Epoch: 3/10 Loss: 3.819620738506317
Epoch: 3/10 Loss: 3.826831775665283
Epoch: 3/10 Loss: 3.8198120632171633
Epoch: 3/10 Loss: 3.833869047641754
Epoch: 3/10 Loss: 3.8217194204330442
Epoch: 3/10 Loss: 3.8340025668144224
Epoch: 3/10 Loss: 3.828302626132965
Epoch: 3/10 Loss: 3.8071245694160463
Epoch: 3/10 Loss: 3.8046836643218995
Epoch: 3/10 Loss: 3.8572446103096008
Epoch: 4/10 Loss: 3.753773261446598
Epoch: 4/10 Loss: 3.6804130439758302
Epoch: 4/10 Loss: 3.6787006554603576
Epoch: 4/10 Loss: 3.7163219866752626
Epoch: 4/10 Loss: 3.733499647140503
Epoch: 4/10 Loss: 3.7071649508476257
Epoch: 4/10 Loss: 3.7347512803077696
Epoch: 4/10 Loss: 3.718183452606201
Epoch: 4/10 Loss: 3.7174845147132873
Epoch: 4/10 Loss: 3.721019902229309
Epoch: 4/10 Loss: 3.730444617271423
Epoch: 4/10 Loss: 3.740888171195984
Epoch: 4/10 Loss: 3.7371507019996644
Epoch: 5/10 Loss: 3.6477775886531703
Epoch: 5/10 Loss: 3.5950684504508974
Epoch: 5/10 Loss: 3.6123854398727415
Epoch: 5/10 Loss: 3.624672025203705
Epoch: 5/10 Loss: 3.6450956172943116
Epoch: 5/10 Loss: 3.6297607488632204
Epoch: 5/10 Loss: 3.6236435842514036
Epoch: 5/10 Loss: 3.6514062147140502
Epoch: 5/10 Loss: 3.63631249332428
Epoch: 5/10 Loss: 3.6581852040290834
Epoch: 5/10 Loss: 3.652893509864807
Epoch: 5/10 Loss: 3.6630671105384827
Epoch: 5/10 Loss: 3.6642883038520813
Epoch: 6/10 Loss: 3.589667251287413
Epoch: 6/10 Loss: 3.5291729593276977
Epoch: 6/10 Loss: 3.5271378355026246
Epoch: 6/10 Loss: 3.5441640973091126
Epoch: 6/10 Loss: 3.5441652855873107
Epoch: 6/10 Loss: 3.5695030293464662
Epoch: 6/10 Loss: 3.5788466606140137
Epoch: 6/10 Loss: 3.566995312690735
Epoch: 6/10 Loss: 3.5882584991455078
Epoch: 6/10 Loss: 3.592465175151825
Epoch: 6/10 Loss: 3.6163834280967713
Epoch: 6/10 Loss: 3.6108879370689393
Epoch: 6/10 Loss: 3.613585807800293
Epoch: 7/10 Loss: 3.537235260748666
Epoch: 7/10 Loss: 3.4488248257637024
Epoch: 7/10 Loss: 3.4847021589279175
Epoch: 7/10 Loss: 3.4764876284599304
Epoch: 7/10 Loss: 3.4782873797416687
Epoch: 7/10 Loss: 3.519772076129913
Epoch: 7/10 Loss: 3.5255536546707154
Epoch: 7/10 Loss: 3.536333700180054
Epoch: 7/10 Loss: 3.5371217584609984
Epoch: 7/10 Loss: 3.537506432056427
Epoch: 7/10 Loss: 3.5342215604782106
Epoch: 7/10 Loss: 3.5489294104576112
Epoch: 7/10 Loss: 3.5646995477676393
Epoch: 8/10 Loss: 3.4739869133500028
Epoch: 8/10 Loss: 3.411940787315369
Epoch: 8/10 Loss: 3.432815299987793
Epoch: 8/10 Loss: 3.4482377791404724
Epoch: 8/10 Loss: 3.461714812755585
Epoch: 8/10 Loss: 3.4574203820228577
Epoch: 8/10 Loss: 3.4470387706756593
Epoch: 8/10 Loss: 3.4905773940086364
Epoch: 8/10 Loss: 3.478639932155609
Epoch: 8/10 Loss: 3.4813049745559694
Epoch: 8/10 Loss: 3.484394829273224
Epoch: 8/10 Loss: 3.516914571762085
Epoch: 8/10 Loss: 3.5124652733802795
Epoch: 9/10 Loss: 3.4374983108733312
Epoch: 9/10 Loss: 3.3787284741401673
Epoch: 9/10 Loss: 3.3779114418029783
Epoch: 9/10 Loss: 3.3890338644981384
Epoch: 9/10 Loss: 3.4027763972282408
Epoch: 9/10 Loss: 3.4187504153251647
Epoch: 9/10 Loss: 3.429298345088959
Epoch: 9/10 Loss: 3.436389921665192
Epoch: 9/10 Loss: 3.4261458411216736
Epoch: 9/10 Loss: 3.441896110534668
Epoch: 9/10 Loss: 3.466687527179718
Epoch: 9/10 Loss: 3.4494317874908447
Epoch: 9/10 Loss: 3.4809471225738524
Epoch: 10/10 Loss: 3.3794581399968835
Epoch: 10/10 Loss: 3.325902421951294
Epoch: 10/10 Loss: 3.3176732869148253
Epoch: 10/10 Loss: 3.3336499962806703
Epoch: 10/10 Loss: 3.374622347831726
Epoch: 10/10 Loss: 3.4176028409004213
Epoch: 10/10 Loss: 3.3704512639045716
Epoch: 10/10 Loss: 3.3957737865447997
Epoch: 10/10 Loss: 3.398438769817352
Epoch: 10/10 Loss: 3.401325791835785
Epoch: 10/10 Loss: 3.421415294647217
Epoch: 10/10 Loss: 3.425583827972412
Epoch: 10/10 Loss: 3.430330708026886
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**I actually made a pretty good first initialization. I assumed the model would train relatively well as a smaller net given the amount of data was pretty limited.I initially went with a sequence length of 200 and batch size of 32. I generally use 32 for my batch size for most of my other models as a starting point. However, training didn't progress at the pace I wanted so I increased the batch size to 128 to see if that would help. (It did).I initially trained for 5 epochs but didn't get great accuracy, so I increased that to 10. While it wasn't super great, it did result in a loss of less than the 3.5 as requested by this project.My learning rate may be a bit high, as the model quickly converged after about 2-3k steps within each epoch, so I could have changed that but kept it at 0.001.Now for embedding dim, I looked at several of the previous assignments we worked on and decided that 500 was near the upper range for most of our previous work, and didn't really want to spend a long time optimizing my hyperparameters, so I decided to use it.The number of hidden dimensions was 512 based on previous assignments as well. I thought about calculating the number of parameters and comparing them to the size of the data that was suggested in one of the earlier lessons, but the model trained sufficiently well and I didn't need to change it.I selected two LSTMs based on literature that said 2 was a good choice for complex problems and 3 provided mixed results. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
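One quick check related to the parameter-count comparison mentioned in the answer above (a sketch that assumes the training cells have just been run, so `trained_rnn` and `int_text` are still in memory):

```python
# Rough size check: trainable parameters vs. number of training tokens
n_params = sum(p.numel() for p in trained_rnn.parameters() if p.requires_grad)
print('Trainable parameters: {:,}'.format(n_params))
print('Training tokens:      {:,}'.format(len(int_text)))
```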
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'kramer' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
kramer: and then the yankees are the only way to get the money.
elaine: i think it's just...
jerry: oh my god!(kramer enters.)
jerry: hey, you want to go out with a little bit, you can get the hell outta here.
george: i don't know.
jerry: no, i'm gonna get a call.
george: i don't want it.
jerry: you don't understand. i don't want you to be a little bit, but i got a little nervous with that.
george: you know what? i mean, i think i can get my car, and i want to know, and i think i can get my own. i mean, i don't know what i think.
jerry: well, you can't believe this.
elaine:(to jerry, george) hey, you know, you don't have to talk to him.
jerry: oh, i think i got a little problem with this.
elaine: oh.(he exits)
jerry:(on phone) oh, hi.(listens)
george:(to kramer) hey, you want to get back to my house?
george: yeah. i think i was in the hospital. you know, i was hoping i was a lot of.(jerry nods)
jerry: oh, i know....
jerry: i was thinking i didn't get a job interview. i was hoping i got to see her.
george: i mean, i mean, what about the show?
jerry: well, i don't know. but you don't think she wants me to get a little more than a little.
george: yeah, yeah, but i'm sorry. i just don't know what to do.
jerry: you can't do this.
george: you want to go.
jerry: no.
elaine: i mean, i don't know if i'm not gonna do it for you.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punctuation_dict = {
"." : "||Period||",
",":"||Comma||",
'"':"||QuotationMark||",
";": "||Semicolon||",
"!":"||Exclamationmark||",
"?":"||Questionmark||",
"(":"||LeftParentheses||",
")":"||RightParentheses||",
"-":"||Dash||",
"\n":"||Return||"
}
return punctuation_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# partial_feature = []
# features = []
# targets = []
# count = 0
# for i in range(0,len(words)):
# if count == sequence_length: # adding plus one as the last value is to included into feature
# features.append(partial_feature)
# partial_feature = []
# count=0
# targets.append(words[i]) # here i value is already next value so saving this value as target
# partial_feature.append(words[i])
# count+=1
# train_dataset = TensorDataset(torch.from_numpy(np.array(features)),torch.from_numpy(np.array(targets)))
# train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
# print("feature: ",x_batch)
batch_y = words[idx_end]
# print("target: ", batch_y)
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure the SHUFFLE your training data
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]], dtype=torch.int32)
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.embedding_dim)
self.lstm = nn.LSTM(input_size=self.embedding_dim,hidden_size=self.hidden_dim, num_layers=self.n_layers,batch_first=True,dropout=dropout)
self.dropout = nn.Dropout(0.3)
self.fc1 = nn.Linear(self.hidden_dim,self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embed = self.embed(nn_input)
nn_input, hidden = self.lstm(embed,hidden)
output = nn_input.contiguous().view(-1, self.hidden_dim)
output = self.fc1(output)
output = output.view(batch_size,-1,self.output_size)
output = output[:,-1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
# # Creating new variables for the hidden state, otherwise
# # we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# get predicted outputs
output, h = rnn(inp, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
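As a rough, standalone illustration of the top-k sampling step mentioned above (toy scores, not output from the trained model):

```python
import numpy as np
import torch
import torch.nn.functional as F

# Toy word scores for a vocabulary of 8 words (one row = one input sequence)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 1.0, 0.4]])

p = F.softmax(scores, dim=1)           # turn scores into probabilities
p, top_i = p.topk(5)                   # keep the 5 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

# sample one of the top ids, weighted by their renormalised probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```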
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
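A slightly more defensive variant of the cell above (just a sketch) uses a context manager, so the file is closed even if the write raises:

```python
# same effect as the cell above, but the file handle is closed automatically
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```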
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
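A tiny, self-contained example of what the two dictionaries should do (a toy word list, not the real script data):

```python
from collections import Counter

toy_text = ['jerry', 'hello', 'jerry', 'george', 'hello', 'jerry']

counts = Counter(toy_text)
vocab = sorted(counts, key=counts.get, reverse=True)      # most frequent word gets the smallest id
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}

encoded = [vocab_to_int[w] for w in toy_text]
print(encoded)                                            # [0, 1, 0, 2, 1, 0]
print([int_to_vocab[i] for i in encoded] == toy_text)     # True
```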
###Code
print(lines[0])
import problem_unittests as tests
from collections import Counter, defaultdict
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counter = Counter(text)
vocab_to_int = defaultdict(int)
int_to_vocab = defaultdict(str)
for i, word_count in enumerate(word_counter.most_common()):
word = word_count[0]
vocab_to_int[word] = i+1
int_to_vocab[i+1] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
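The dictionary itself is only half the picture: `preprocess_and_save_data` in `helper.py` uses it to replace each symbol with its token, padded by spaces. A rough sketch of that replacement (a minimal three-symbol dictionary, not the exact helper code):

```python
def demo_token_lookup():
    # minimal subset of the full dictionary, for illustration only
    return {'.': '||period||', '!': '||exclamation_mark||', '\n': '||return||'}

text = 'bye! bye.\nhello.'
for symbol, token in demo_token_lookup().items():
    text = text.replace(symbol, ' {} '.format(token))

print(text.lower().split())
# ['bye', '||exclamation_mark||', 'bye', '||period||', '||return||', 'hello', '||period||']
```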
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
punctuations = ['.', ',', '"', ';', '!', '?', '(', ')', '-', '\n']
tokens = ['||period||',
'||comma||',
'||quotation_mark||',
'||semicolon||',
'||exclamation_mark||',
'||question_mark||',
'||left_parantheses||',
'||right_parantheses||',
'||dash||',
'||return||']
punctuation_to_token = defaultdict(str)
for i, item in enumerate(punctuations):
punctuation_to_token[item] = tokens[i]
return punctuation_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
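A minimal sketch of how the toy example above maps onto `TensorDataset` and `DataLoader` (illustration only; the real `batch_data` implementation follows in the next cell):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

# slide a window of length sequence_length over the words;
# the word right after each window is that window's target
features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
loader = DataLoader(data, batch_size=2)

for x, y in loader:
    print(x.tolist(), y.tolist())
# [[1, 2, 3, 4], [2, 3, 4, 5]] [5, 6]
# [[3, 4, 5, 6]] [7]
```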
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
#Iterate over the words, moving one word at a time.
    # Only iterate up to len(words) - sequence_length; past that point the target index would be out of range.
for ibatch in range(0, len(words) - sequence_length, 1):
features.append(words[ibatch:ibatch+sequence_length])
targets.append(words[ibatch+sequence_length])
features = torch.tensor(features)
targets = torch.tensor(targets)
#print(targets)
dataset = TensorDataset(features, targets)
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
#print(features[0:10])
#print(targets[0:10])
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
print(t_loader.dataset.tensors)
###Output
(tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13],
[ 10, 11, 12, 13, 14],
[ 11, 12, 13, 14, 15],
[ 12, 13, 14, 15, 16],
[ 13, 14, 15, 16, 17],
[ 14, 15, 16, 17, 18],
[ 15, 16, 17, 18, 19],
[ 16, 17, 18, 19, 20],
[ 17, 18, 19, 20, 21],
[ 18, 19, 20, 21, 22],
[ 19, 20, 21, 22, 23],
[ 20, 21, 22, 23, 24],
[ 21, 22, 23, 24, 25],
[ 22, 23, 24, 25, 26],
[ 23, 24, 25, 26, 27],
[ 24, 25, 26, 27, 28],
[ 25, 26, 27, 28, 29],
[ 26, 27, 28, 29, 30],
[ 27, 28, 29, 30, 31],
[ 28, 29, 30, 31, 32],
[ 29, 30, 31, 32, 33],
[ 30, 31, 32, 33, 34],
[ 31, 32, 33, 34, 35],
[ 32, 33, 34, 35, 36],
[ 33, 34, 35, 36, 37],
[ 34, 35, 36, 37, 38],
[ 35, 36, 37, 38, 39],
[ 36, 37, 38, 39, 40],
[ 37, 38, 39, 40, 41],
[ 38, 39, 40, 41, 42],
[ 39, 40, 41, 42, 43],
[ 40, 41, 42, 43, 44],
[ 41, 42, 43, 44, 45],
[ 42, 43, 44, 45, 46],
[ 43, 44, 45, 46, 47],
[ 44, 45, 46, 47, 48]]), tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28,
29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49]))
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 30, 31, 32, 33, 34],
[ 26, 27, 28, 29, 30],
[ 35, 36, 37, 38, 39],
[ 20, 21, 22, 23, 24],
[ 33, 34, 35, 36, 37],
[ 28, 29, 30, 31, 32],
[ 39, 40, 41, 42, 43],
[ 31, 32, 33, 34, 35],
[ 19, 20, 21, 22, 23],
[ 44, 45, 46, 47, 48]])
torch.Size([10])
tensor([ 35, 31, 40, 25, 38, 33, 44, 36, 24, 49])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of words** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output one, next word.
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
#self.hidden = self.init_hidden()
# set class variables
self.vocab_size = vocab_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.output_size = output_size
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.LSTM = nn.LSTM(embedding_dim,
hidden_dim,
n_layers,
dropout=dropout,
batch_first=True)
#Map LSTM output to predictions
self.fc = nn.Linear(hidden_dim, output_size)
#self.Dropout = nn.Dropout(0.2)
#self.sigmoid = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
#print("batch size : {}" .format(batch_size))
#Get word embeddings
embeddings = self.embedding(nn_input)
#print(embeddings.size())
#get lstm outputs
lstm_out, hidden_state = self.LSTM(embeddings, hidden)
#print("Shape before stacking lstm: {}" .format(lstm_out.contiguous().size()))
#Reorder/stack the outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#print("Shape after stacking lstm: {}" .format(lstm_out.size()))
out = self.fc(lstm_out)
#Would include this if i had more time to train
#out = self.Dropout(out)
#print("output size before reshaped to batch size first: {}" .format(out.size()))
#Reorder to batch_size, seq_length, output_size
out = out.view(batch_size, -1, self.output_size)
#print("output size after reshaped to batch size first: {}" .format(out.size()))
#print(out.size())
prediction = out[:, -1] # get last batch of labels
# print(prediction.size())
# return one batch of output word scores and the hidden state
return prediction, hidden_state
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
#Know that I'm training on GPU
if (train_on_gpu):
hidden_state = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            # no GPU: fall back to CPU tensors so a valid hidden state is still returned
            hidden_state = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                            weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
        return hidden_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
from torch.nn.utils import clip_grad_norm_
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
    # detach the hidden state from its history so we don't backprop through the entire past
    hidden = tuple(each.data for each in hidden)
    if train_on_gpu:
        inp = inp.cuda()
        target = target.cuda()
        hidden = tuple(each.cuda() for each in hidden)
#print(hidden)
optimizer.zero_grad()
output, hidden_state = rnn(inp, hidden)
#print(output)
loss = criterion(output, target)
loss.backward()
clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden_state
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
#print(labels)
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class. Trying to find the average sequence length in the script
###Code
import re
line_lengths = []
#Half of the lines are empty (odd numbers are just blank spaces)
for i, line in enumerate(lines):
#Check that we are on an even number (contains text)
if not i%2:
line_lengths.append(len(line.split()))
print("average number of words per line in seinfeld: {}" .format(int(np.mean(line_lengths))))
###Output
average number of words per line in seinfeld: 11
###Markdown
The average number of words per line is used as the sequence length
###Code
# Data params
# Sequence Length
sequence_length = 11 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn. Just copied the content from workspace_utils to enable training over time
###Code
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
def keep_awake(iterable, delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import keep_awake
for i in keep_awake(range(5)):
# do iteration with lots of work here
"""
with active_session(delay, interval): yield from iterable
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
for i in keep_awake(range(1)):
# do iteration with lots of work here
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.3030669279098515
Epoch: 1/20 Loss: 4.620214327812195
Epoch: 1/20 Loss: 4.433250930786133
Epoch: 1/20 Loss: 4.293495012760163
Epoch: 1/20 Loss: 4.230082350254059
Epoch: 1/20 Loss: 4.168107018947602
Epoch: 2/20 Loss: 4.0285630340014045
Epoch: 2/20 Loss: 3.9249883275032045
Epoch: 2/20 Loss: 3.9215549778938295
Epoch: 2/20 Loss: 3.8988177995681763
Epoch: 2/20 Loss: 3.8650728163719177
Epoch: 2/20 Loss: 3.8652200956344602
Epoch: 3/20 Loss: 3.7584196871858304
Epoch: 3/20 Loss: 3.6690328950881956
Epoch: 3/20 Loss: 3.677587031841278
Epoch: 3/20 Loss: 3.6664019713401794
Epoch: 3/20 Loss: 3.6574306178092955
Epoch: 3/20 Loss: 3.6558912591934205
Epoch: 4/20 Loss: 3.5670045902573966
Epoch: 4/20 Loss: 3.4957964677810667
Epoch: 4/20 Loss: 3.486451469421387
Epoch: 4/20 Loss: 3.49300568151474
Epoch: 4/20 Loss: 3.5131288185119627
Epoch: 4/20 Loss: 3.5089879660606385
Epoch: 5/20 Loss: 3.412592833119679
Epoch: 5/20 Loss: 3.3397996444702147
Epoch: 5/20 Loss: 3.361472554206848
Epoch: 5/20 Loss: 3.3689837012290953
Epoch: 5/20 Loss: 3.369387209892273
Epoch: 5/20 Loss: 3.392926040649414
Epoch: 6/20 Loss: 3.2918449279254043
Epoch: 6/20 Loss: 3.23577649974823
Epoch: 6/20 Loss: 3.2408394708633423
Epoch: 6/20 Loss: 3.242942500591278
Epoch: 6/20 Loss: 3.264701265335083
Epoch: 6/20 Loss: 3.2813189492225647
Epoch: 7/20 Loss: 3.1913416392919496
Epoch: 7/20 Loss: 3.1375364003181456
Epoch: 7/20 Loss: 3.1472790060043336
Epoch: 7/20 Loss: 3.159783121585846
Epoch: 7/20 Loss: 3.1825279612541197
Epoch: 7/20 Loss: 3.182379403591156
Epoch: 8/20 Loss: 3.104790642736404
Epoch: 8/20 Loss: 3.050485185623169
Epoch: 8/20 Loss: 3.07590904378891
Epoch: 8/20 Loss: 3.091768286705017
Epoch: 8/20 Loss: 3.0943512878417967
Epoch: 8/20 Loss: 3.1168371138572692
Epoch: 9/20 Loss: 3.0355744698667912
Epoch: 9/20 Loss: 2.9791538524627685
Epoch: 9/20 Loss: 2.988285686969757
Epoch: 9/20 Loss: 3.0176438517570494
Epoch: 9/20 Loss: 3.0452546997070313
Epoch: 9/20 Loss: 3.066123980998993
Epoch: 10/20 Loss: 2.977760581708536
Epoch: 10/20 Loss: 2.9160740399360656
Epoch: 10/20 Loss: 2.9462120776176453
Epoch: 10/20 Loss: 2.9596316838264465
Epoch: 10/20 Loss: 2.980649769306183
Epoch: 10/20 Loss: 3.003009355545044
Epoch: 11/20 Loss: 2.923106189908051
Epoch: 11/20 Loss: 2.862457206726074
Epoch: 11/20 Loss: 2.9037794451713563
Epoch: 11/20 Loss: 2.917519901752472
Epoch: 11/20 Loss: 2.927754180431366
Epoch: 11/20 Loss: 2.9438069467544556
Epoch: 12/20 Loss: 2.8753611903365064
Epoch: 12/20 Loss: 2.820076726436615
Epoch: 12/20 Loss: 2.8434745416641234
Epoch: 12/20 Loss: 2.867061561584473
Epoch: 12/20 Loss: 2.8823512825965882
Epoch: 12/20 Loss: 2.9043544535636903
Epoch: 13/20 Loss: 2.831178372710701
Epoch: 13/20 Loss: 2.788753336429596
Epoch: 13/20 Loss: 2.811571711063385
Epoch: 13/20 Loss: 2.8314144930839538
Epoch: 13/20 Loss: 2.848972408294678
Epoch: 13/20 Loss: 2.8588183851242066
Epoch: 14/20 Loss: 2.79452690821353
Epoch: 14/20 Loss: 2.7534012842178344
Epoch: 14/20 Loss: 2.769148416042328
Epoch: 14/20 Loss: 2.7953113560676575
Epoch: 14/20 Loss: 2.8087381014823913
Epoch: 14/20 Loss: 2.8223458380699156
Epoch: 15/20 Loss: 2.7667517514248203
Epoch: 15/20 Loss: 2.717128375530243
Epoch: 15/20 Loss: 2.7278556532859803
Epoch: 15/20 Loss: 2.750905842781067
Epoch: 15/20 Loss: 2.772277947425842
Epoch: 15/20 Loss: 2.7999129528999327
Epoch: 16/20 Loss: 2.7273548015249456
Epoch: 16/20 Loss: 2.672714412689209
Epoch: 16/20 Loss: 2.7014059419631957
Epoch: 16/20 Loss: 2.7294003405570986
Epoch: 16/20 Loss: 2.754824188232422
Epoch: 16/20 Loss: 2.768513523578644
Epoch: 17/20 Loss: 2.696716475292919
Epoch: 17/20 Loss: 2.6618661236763
Epoch: 17/20 Loss: 2.6799789052009584
Epoch: 17/20 Loss: 2.694623122692108
Epoch: 17/20 Loss: 2.7209681930541993
Epoch: 17/20 Loss: 2.728618656158447
Epoch: 18/20 Loss: 2.672928665227037
Epoch: 18/20 Loss: 2.6243604826927185
Epoch: 18/20 Loss: 2.65350098657608
Epoch: 18/20 Loss: 2.6734824132919313
Epoch: 18/20 Loss: 2.7050926628112792
Epoch: 18/20 Loss: 2.702825466632843
Epoch: 19/20 Loss: 2.6487108401166712
Epoch: 19/20 Loss: 2.595060845851898
Epoch: 19/20 Loss: 2.6304976534843445
Epoch: 19/20 Loss: 2.6442951736450193
Epoch: 19/20 Loss: 2.668322277069092
Epoch: 19/20 Loss: 2.6911623067855834
Epoch: 20/20 Loss: 2.6228444375158326
Epoch: 20/20 Loss: 2.5815917649269102
Epoch: 20/20 Loss: 2.599704447746277
Epoch: 20/20 Loss: 2.627982174873352
Epoch: 20/20 Loss: 2.64976882314682
Epoch: 20/20 Loss: 2.6682947492599487
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** A previous task in this course highlighted that a large batch size is optimal in theory, but small batch sizes often give good results in practice. A batch size of 32 was therefore used as a starting point and increased by powers of 2; the best results were obtained with a batch size of 256. The sequence length was chosen by calculating the average number of words per line. The learning rate started at 0.01, with the plan to decrease it if the loss stopped improving or bounced back and forth too much; lr = 0.001 turned out to work better. I had read that 256 and 512 are common starting values for hidden_dim, so I started with 512 and planned to adjust it by powers of 2 if the network couldn't converge. I had also read that there is rarely any gain in going beyond 2-3 hidden layers, so I started with n_layers = 2. One thing I noticed was that I first got very bad results with a sigmoid on the output; removing it drastically improved the results. If you have a good explanation for this, I would be grateful for some input! My thought is that "squashing" the values with the sigmoid function might be counter-productive when there are so many possible output values. I was unsure about the embedding dimension, but experimenting with different values (100, 150, 200, ...) gave a decent indication, and an embedding dimension of 200 showed great results. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval() # eval mode
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the index of the most likely next word
top_i = torch.multinomial(output.exp().data, 1).item()
# retrieve that word from the dictionary
word = int_to_vocab[top_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = top_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!) CUDA is making me struggle pretty badly; training went fine and I got good results. The testing, however, won't let me through.
###Code
# run the cell multiple times to get different results!
gen_length = 20 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:59: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
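The `UserWarning` above is harmless, but the message itself points at a fix: compacting the LSTM weights after the model has been loaded and moved to the GPU. A hedged one-liner (assuming the LSTM submodule is named `LSTM`, as in the `RNN` class defined above):

```python
# compact the LSTM weights into one contiguous chunk, as the warning suggests
trained_rnn.LSTM.flatten_parameters()
```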
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
# https://github.com/taimurzahid/Deep-Learning-Nanodegree/blob/master/sentiment-rnn/Sentiment_RNN_Exercise_mine.ipynb
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# DONE: Implement Function
## Build a dictionary that maps words to integers
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 1)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.' : '||period||',
',' : '||comma||',
'\"' : '||quotation_mark||',
';' : '||semicolon||',
'!' : '||exclamationmark||',
'?' : '||questionmark||',
'(' : '||leftparentheses||',
')' : '||rightparentheses||',
'-' : '||dash||',
'\n' : '||return||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU')
###Output
Training on GPU
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import math
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
#data = TensorDataset(feature_tensors, target_tensors)
#data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
words_truncated = words[:-sequence_length]
#words_truncated = words_len - (batch_size * sequence_length)
#words_truncated = words_len - (batch_size * total_batches)
#upper_limit = total_batches * batch_size
#total_batches = batch_size * sequence_length
total_batches = len(words_truncated) // batch_size
    print('Words Length: ' + str(len(words)))
    print('Sequence Length: ' + str(sequence_length))
print('Batch Size: ' + str(batch_size))
print('Total Batches: ' + str(total_batches))
print('Words Truncated: ' + str(words_truncated))
features = []
targets = []
for i in range(0, len(words_truncated)):
last = i + sequence_length
#print('Feature Tensor: ' + str(words[i:last]))
#print('Target Tensor: ' + str(words[last]))
feature = words[i:last]
target = words[last]
features.append(feature)
targets.append(target)
features = features[:total_batches*batch_size]
targets = targets[:total_batches*batch_size]
data = TensorDataset(torch.from_numpy(np.array(features)), torch.from_numpy(np.array(targets)))
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_data(int_text[:53], 5, 10)
###Output
Words Length: 53
Sequence Length: 5
Batch Size: 10
Total Batches: 4
Words Truncated: [25, 23, 48, 2, 2, 2, 18, 48, 23, 83, 21, 7, 1253, 546, 8783, 7190, 21, 242, 2, 150, 2, 2, 2, 85, 5, 201, 239, 150, 209, 59, 56, 136, 65, 48, 4, 25, 23, 19, 678, 209, 59, 2, 2, 2, 25, 221, 127, 3]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
Words Length: 50
Sequence Length: 5
Batch Size: 10
Total Batches: 4
Words Truncated: range(0, 45)
torch.Size([10, 5])
tensor([[ 11, 12, 13, 14, 15],
[ 19, 20, 21, 22, 23],
[ 30, 31, 32, 33, 34],
[ 38, 39, 40, 41, 42],
[ 16, 17, 18, 19, 20],
[ 36, 37, 38, 39, 40],
[ 32, 33, 34, 35, 36],
[ 24, 25, 26, 27, 28],
[ 25, 26, 27, 28, 29],
[ 35, 36, 37, 38, 39]])
torch.Size([10])
tensor([ 16, 24, 35, 43, 21, 41, 37, 29, 30, 40])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
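To make the shape bookkeeping in the hints concrete, here is a minimal, self-contained sketch with made-up sizes (these are not the hyperparameters chosen later in this notebook):
```
import torch
import torch.nn as nn

batch_size, seq_len, vocab, embed_dim, hidden_dim, n_layers = 4, 5, 20, 8, 6, 2

embedding = nn.Embedding(vocab, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, n_layers, batch_first=True)
fc = nn.Linear(hidden_dim, vocab)

x = torch.randint(0, vocab, (batch_size, seq_len))   # (batch, seq) of word ids
lstm_out, hidden = lstm(embedding(x))                # lstm_out: (batch, seq, hidden)
flat = lstm_out.contiguous().view(-1, hidden_dim)    # (batch*seq, hidden)
out = fc(flat).view(batch_size, -1, vocab)[:, -1]    # last time step only: (batch, vocab)
print(out.shape)                                     # torch.Size([4, 20])
```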
###Code
import torch.nn as nn
import torch.optim as optim
# https://github.com/taimurzahid/Deep-Learning-Nanodegree/blob/master/sentiment-rnn/Sentiment_RNN_Exercise_mine.ipynb
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.vocab_size = vocab_size
self.n_layers = n_layers
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.dropout = dropout
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.dropout, batch_first=True)
# dropout layer
#self.dropout = nn.Dropout(0.3)
# linear layer
self.fc = nn.Linear(self.hidden_dim, self.output_size)
#self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input.long())
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
out = self.fc(lstm_out)
#sigmoid function
#out = self.sig(out)
# reshape to be batch_size first
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
# https://github.com/taimurzahid/Deep-Learning-Nanodegree/blob/master/sentiment-rnn/Sentiment_RNN_Exercise_mine.ipynb
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
#rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# detach the hidden state from its history so gradients do not flow back through every previous batch
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, h = rnn(inp, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
clip=5 # gradient clipping
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
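A quick back-of-the-envelope check of these choices, assuming the ~892k token ids reported by the pre-processing step elsewhere in this notebook:
```
# assumed token count (reported by the pre-processing step elsewhere in this notebook)
n_tokens = 892111
sequence_length, batch_size, show_every_n_batches = 16, 128, 500   # values chosen in the next cell
n_batches = (n_tokens - sequence_length) // batch_size
print(n_batches, n_batches // show_every_n_batches)                # ~6969 batches, ~13 progress lines per epoch
```
With `show_every_n_batches = 500`, that works out to roughly 13 progress printouts per epoch, which matches the training log below.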
###Code
# Data params
# Sequence Length
sequence_length = 16 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int) + 1 # 1 is added based on Slack discussion
# Output size
output_size = len(vocab_to_int) + 1 # 1 is added based on Slack discussion
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.471050339698792
Epoch: 1/10 Loss: 4.83803058385849
Epoch: 1/10 Loss: 4.610782274723053
Epoch: 1/10 Loss: 4.520275905609131
Epoch: 1/10 Loss: 4.413108244895935
Epoch: 1/10 Loss: 4.36340319442749
Epoch: 1/10 Loss: 4.334854992866516
Epoch: 1/10 Loss: 4.273975263118744
Epoch: 1/10 Loss: 4.2341670255661015
Epoch: 1/10 Loss: 4.2165902523994445
Epoch: 1/10 Loss: 4.1803207430839535
Epoch: 1/10 Loss: 4.165702053070069
Epoch: 1/10 Loss: 4.159639587402344
Epoch: 2/10 Loss: 4.045049037726671
Epoch: 2/10 Loss: 3.9613006258010866
Epoch: 2/10 Loss: 3.938518739700317
Epoch: 2/10 Loss: 3.9416296734809877
Epoch: 2/10 Loss: 3.931606788635254
Epoch: 2/10 Loss: 3.934953164100647
Epoch: 2/10 Loss: 3.9185888533592226
Epoch: 2/10 Loss: 3.9147161779403685
Epoch: 2/10 Loss: 3.91242454957962
Epoch: 2/10 Loss: 3.93124987745285
Epoch: 2/10 Loss: 3.8877110123634337
Epoch: 2/10 Loss: 3.8978343958854675
Epoch: 2/10 Loss: 3.8805348863601683
Epoch: 3/10 Loss: 3.7956401014967724
Epoch: 3/10 Loss: 3.722789454936981
Epoch: 3/10 Loss: 3.7303161554336546
Epoch: 3/10 Loss: 3.74334672164917
Epoch: 3/10 Loss: 3.7583466691970826
Epoch: 3/10 Loss: 3.735187967300415
Epoch: 3/10 Loss: 3.749377622127533
Epoch: 3/10 Loss: 3.728091073036194
Epoch: 3/10 Loss: 3.757435890674591
Epoch: 3/10 Loss: 3.732640972614288
Epoch: 3/10 Loss: 3.767749113082886
Epoch: 3/10 Loss: 3.7501944437026977
Epoch: 3/10 Loss: 3.7486029677391053
Epoch: 4/10 Loss: 3.67139998456642
Epoch: 4/10 Loss: 3.5945886602401735
Epoch: 4/10 Loss: 3.6112404036521912
Epoch: 4/10 Loss: 3.6088072242736815
Epoch: 4/10 Loss: 3.6137901096343996
Epoch: 4/10 Loss: 3.6236182446479797
Epoch: 4/10 Loss: 3.6320138945579528
Epoch: 4/10 Loss: 3.617330150604248
Epoch: 4/10 Loss: 3.616030725479126
Epoch: 4/10 Loss: 3.656285080909729
Epoch: 4/10 Loss: 3.6391136193275453
Epoch: 4/10 Loss: 3.645322675704956
Epoch: 4/10 Loss: 3.662479877471924
Epoch: 5/10 Loss: 3.582206561961533
Epoch: 5/10 Loss: 3.509097593784332
Epoch: 5/10 Loss: 3.5116215534210204
Epoch: 5/10 Loss: 3.5309250173568727
Epoch: 5/10 Loss: 3.5161764755249023
Epoch: 5/10 Loss: 3.531072009563446
Epoch: 5/10 Loss: 3.554388628959656
Epoch: 5/10 Loss: 3.550441417694092
Epoch: 5/10 Loss: 3.542397045612335
Epoch: 5/10 Loss: 3.544317928314209
Epoch: 5/10 Loss: 3.577099612236023
Epoch: 5/10 Loss: 3.587551784515381
Epoch: 5/10 Loss: 3.590496362686157
Epoch: 6/10 Loss: 3.5002912805791486
Epoch: 6/10 Loss: 3.442498236656189
Epoch: 6/10 Loss: 3.430368718624115
Epoch: 6/10 Loss: 3.450769548892975
Epoch: 6/10 Loss: 3.4606251792907714
Epoch: 6/10 Loss: 3.4698970947265626
Epoch: 6/10 Loss: 3.480668309688568
Epoch: 6/10 Loss: 3.4796881718635557
Epoch: 6/10 Loss: 3.5031061816215514
Epoch: 6/10 Loss: 3.4850419368743895
Epoch: 6/10 Loss: 3.517740294933319
Epoch: 6/10 Loss: 3.513252761363983
Epoch: 6/10 Loss: 3.518762966632843
Epoch: 7/10 Loss: 3.4310088472592697
Epoch: 7/10 Loss: 3.364038896560669
Epoch: 7/10 Loss: 3.3791239199638365
Epoch: 7/10 Loss: 3.3790149116516113
Epoch: 7/10 Loss: 3.3892043924331663
Epoch: 7/10 Loss: 3.412972182273865
Epoch: 7/10 Loss: 3.4104024200439453
Epoch: 7/10 Loss: 3.4254494071006776
Epoch: 7/10 Loss: 3.451978789329529
Epoch: 7/10 Loss: 3.463737847805023
Epoch: 7/10 Loss: 3.4808978543281555
Epoch: 7/10 Loss: 3.454183559894562
Epoch: 7/10 Loss: 3.4894270219802856
Epoch: 8/10 Loss: 3.3934042680128194
Epoch: 8/10 Loss: 3.3336652588844298
Epoch: 8/10 Loss: 3.3350847973823545
Epoch: 8/10 Loss: 3.3370099563598634
Epoch: 8/10 Loss: 3.361009126186371
Epoch: 8/10 Loss: 3.37625600194931
Epoch: 8/10 Loss: 3.3666697998046873
Epoch: 8/10 Loss: 3.3862027969360353
Epoch: 8/10 Loss: 3.4105325055122377
Epoch: 8/10 Loss: 3.390837902545929
Epoch: 8/10 Loss: 3.4133527789115905
Epoch: 8/10 Loss: 3.4063360571861265
Epoch: 8/10 Loss: 3.433782564163208
Epoch: 9/10 Loss: 3.36070151176492
Epoch: 9/10 Loss: 3.2830175766944887
Epoch: 9/10 Loss: 3.29934467792511
Epoch: 9/10 Loss: 3.298807931423187
Epoch: 9/10 Loss: 3.3278138399124146
Epoch: 9/10 Loss: 3.330359736442566
Epoch: 9/10 Loss: 3.3381458597183227
Epoch: 9/10 Loss: 3.3494182267189028
Epoch: 9/10 Loss: 3.348780823230743
Epoch: 9/10 Loss: 3.345073464870453
Epoch: 9/10 Loss: 3.3820195574760437
Epoch: 9/10 Loss: 3.382025594711304
Epoch: 9/10 Loss: 3.402317970752716
Epoch: 10/10 Loss: 3.323510438911194
Epoch: 10/10 Loss: 3.2669842920303345
Epoch: 10/10 Loss: 3.2750245847702026
Epoch: 10/10 Loss: 3.2779884939193726
Epoch: 10/10 Loss: 3.2674008169174193
Epoch: 10/10 Loss: 3.3014803156852723
Epoch: 10/10 Loss: 3.313063966751099
Epoch: 10/10 Loss: 3.329684274196625
Epoch: 10/10 Loss: 3.318878095149994
Epoch: 10/10 Loss: 3.330470955371857
Epoch: 10/10 Loss: 3.3408906922340393
Epoch: 10/10 Loss: 3.346896275520325
Epoch: 10/10 Loss: 3.3508639421463013
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I started from the suggested value for n_layers but reduced it to 2, since the model wasn't improving with 4 hidden layers. I also trained for 10 epochs, which turned out to be overkill, as the loss didn't improve significantly in the later epochs. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
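The only non-obvious step in `generate` below is the sampling itself, so here is a small, self-contained illustration of top-k sampling on made-up word scores (toy numbers, not real model output):
```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2, 0.0, 1.0, 0.4]])   # toy scores over 8 "words"
p = F.softmax(scores, dim=1).data

top_k = 5
p, top_i = p.topk(top_k)                           # keep only the 5 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())    # sample one id, weighted by its probability
print(top_i, word_i)
```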
###Code
# https://stackoverflow.com/questions/53900910/typeerror-can-t-convert-cuda-tensor-to-numpy-use-tensor-cpu-to-copy-the-tens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cuda() # scores stay on the GPU here; they are moved to the CPU just before the numpy conversion below
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
if train_on_gpu:
top_i = top_i.cpu().numpy().squeeze()
else:
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
if train_on_gpu:
p = p.cpu().numpy().squeeze()
else:
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
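One optional sanity check before generating: the script is primed with `prime_word + ':'`, so that exact speaker tag has to exist in the vocabulary (this sketch assumes `vocab_to_int` from the checkpoint cell above):
```
# hypothetical pre-check; relies on vocab_to_int loaded in the checkpoint cell above
prime_word = 'elaine'
assert prime_word + ':' in vocab_to_int, "choose a speaker name that appears in the scripts"
```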
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:53: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
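The cell below writes the file with an explicit `open`/`close`; an equivalent variant using a context manager (assuming `generated_script` from the cell above) closes the file automatically:
```
# same effect as the cell below, but the file handle is closed automatically
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```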
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# map each unique word to an integer id (starting at 1) and build the reverse mapping
words = tuple(set(text))
print("UNIQUE_WORDS,WORDS", len(words), len(list(text)))
vocab_to_int = {word: i + 1 for i, word in enumerate(words)}
int_to_vocab = {i + 1: word for i, word in enumerate(words)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
UNIQUE_WORDS,WORDS 71 104
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
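As a small, self-contained illustration, a toy dictionary with just two of the tokens defined below shows how the punctuation is split off into its own "word":
```
# toy subset of the token dictionary (values mirror the implementation below)
token_dict = {'.': '||period||', '!': '||exclamation_mark||'}
line = "bye! bye."
for key, token in token_dict.items():
    line = line.replace(key, ' {} '.format(token))
print(line.split())   # ['bye', '||exclamation_mark||', 'bye', '||period||']
```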
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dic ={".":"||period||",",":"||comma||","\"":"||quotation_Mark||", \
";":"||semicolon||","!":"||exclamation_mark||","?":"||question_mark||", \
"(":"||left_Parentheses||",")":"||right_Parentheses||","-":"||dash||", \
"\n":"||return||"}
return dic
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
UNIQUE_WORDS,WORDS 21388 892111
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(int_text[:100])
###Output
[6827, 16899, 14656, 11245, 11245, 11245, 3713, 14656, 16899, 5574, 6465, 19096, 533, 933, 1508, 20850, 6465, 2015, 11245, 20273, 11245, 11245, 11245, 692, 15011, 7188, 5553, 20273, 12593, 3958, 9229, 14284, 16250, 14656, 14132, 6827, 16899, 12419, 7121, 12593, 3958, 11245, 11245, 11245, 6827, 185, 1412, 16845, 8206, 11223, 14656, 9134, 16845, 167, 5574, 16899, 6761, 11245, 18425, 5574, 20454, 16725, 16899, 6761, 16845, 8206, 11223, 14656, 20632, 19314, 20626, 20273, 2094, 20736, 7248, 12943, 16845, 3597, 20975, 3868, 21243, 9229, 20626, 11245, 17269, 10972, 14349, 21021, 18784, 3467, 692, 15011, 5968, 14132, 16845, 6200, 9626, 7248, 9036, 11245]
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
print("Seq_length Batch_size",sequence_length,batch_size)
print("Word type",type(words),len(words))
x_list = []
y_list = []
# number of complete batches that can be built from the available (feature, target) windows
no_of_iter = (len(words)-sequence_length)//batch_size
words =np.array(list(words))
for i in range(0,len(words)-sequence_length):
if(i<no_of_iter*batch_size):
x = words[i:i+sequence_length]
y = words[i+sequence_length]
x_list.append(x)
y_list.append(y)
#print(x_list)
#print(y_list)
data = TensorDataset(torch.from_numpy(np.array(x_list)), torch.from_numpy(np.array(y_list)))
data_loader = DataLoader(data, shuffle=True, batch_size=batch_size)
return data_loader
def test_batch_data(lst, seq_len, batch_size, expected_nb_batches, expected_nb_examples):
nb_batches = 0
nb_examples = 0
dl = batch_data(lst, seq_len, batch_size)
for x, y in dl:
print(x)
print(y)
nb_batches += 1
nb_examples += x.size(0)
assert x.size() == (batch_size, seq_len), " x.size(): {} found, expected {}".format(list(x.size()), [batch_size, seq_len])
assert y.size() == (batch_size,), "y.size(): {} found, expected {}".format(y.size(), (batch_size,))
assert expected_nb_batches == nb_batches, "nb_batches: {}, expected {}".format(nb_batches, expected_nb_batches)
assert expected_nb_examples == nb_examples, "nb_examples: {}, expected {}".format(nb_examples, expected_nb_examples)
print("Done!")
test_batch_data(list(range(0, 20)), 6, 4, expected_nb_batches=3, expected_nb_examples=12)
test_batch_data(list(range(0, 20)), 4, 5, expected_nb_batches=3, expected_nb_examples=15)
test_batch_data(list(range(0, 10)), 3, 3, expected_nb_batches=2, expected_nb_examples=6)
###Output
Seq_length Batch_size 6 4
Word type <class 'list'> 20
tensor([[ 2, 3, 4, 5, 6, 7],
[ 3, 4, 5, 6, 7, 8],
[ 10, 11, 12, 13, 14, 15],
[ 5, 6, 7, 8, 9, 10]])
tensor([ 8, 9, 16, 11])
tensor([[ 11, 12, 13, 14, 15, 16],
[ 4, 5, 6, 7, 8, 9],
[ 0, 1, 2, 3, 4, 5],
[ 8, 9, 10, 11, 12, 13]])
tensor([ 17, 10, 6, 14])
tensor([[ 7, 8, 9, 10, 11, 12],
[ 6, 7, 8, 9, 10, 11],
[ 9, 10, 11, 12, 13, 14],
[ 1, 2, 3, 4, 5, 6]])
tensor([ 13, 12, 15, 7])
Done!
Seq_length Batch_size 4 5
Word type <class 'list'> 20
tensor([[ 2, 3, 4, 5],
[ 10, 11, 12, 13],
[ 8, 9, 10, 11],
[ 13, 14, 15, 16],
[ 1, 2, 3, 4]])
tensor([ 6, 14, 12, 17, 5])
tensor([[ 3, 4, 5, 6],
[ 6, 7, 8, 9],
[ 14, 15, 16, 17],
[ 7, 8, 9, 10],
[ 9, 10, 11, 12]])
tensor([ 7, 10, 18, 11, 13])
tensor([[ 5, 6, 7, 8],
[ 4, 5, 6, 7],
[ 11, 12, 13, 14],
[ 0, 1, 2, 3],
[ 12, 13, 14, 15]])
tensor([ 9, 8, 15, 4, 16])
Done!
Seq_length Batch_size 3 3
Word type <class 'list'> 10
tensor([[ 1, 2, 3],
[ 2, 3, 4],
[ 5, 6, 7]])
tensor([ 4, 5, 8])
tensor([[ 4, 5, 6],
[ 0, 1, 2],
[ 3, 4, 5]])
tensor([ 7, 3, 6])
Done!
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
import numpy as np
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=5)
print(t_loader)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
#print()
print(sample_y.shape)
print(sample_y)
###Output
Seq_length Batch_size 5 5
Word type <class 'range'> 50
<torch.utils.data.dataloader.DataLoader object at 0x7fe1cefcb550>
torch.Size([5, 5])
tensor([[ 3, 4, 5, 6, 7],
[ 26, 27, 28, 29, 30],
[ 4, 5, 6, 7, 8],
[ 11, 12, 13, 14, 15],
[ 21, 22, 23, 24, 25]])
torch.Size([5])
tensor([ 8, 31, 9, 16, 26])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
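The reshaping in hint 2 above is easiest to see with dummy tensors; the sizes below are arbitrary and the snippet is only a sketch of the shape bookkeeping, not part of the graded implementation:
```
import torch
import torch.nn as nn

batch_size, seq_len, embedding_dim, hidden_dim, output_size = 4, 5, 8, 16, 10

lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=2, batch_first=True)
fc = nn.Linear(hidden_dim, output_size)

x = torch.randn(batch_size, seq_len, embedding_dim)       # stand-in for embedded input
lstm_output, _ = lstm(x)                                   # (batch, seq_len, hidden_dim)

out = fc(lstm_output.contiguous().view(-1, hidden_dim))    # (batch * seq_len, output_size)
out = out.view(batch_size, -1, output_size)                # (batch, seq_len, output_size)
out = out[:, -1]                                           # keep scores for the last time step
print(out.shape)                                           # torch.Size([4, 10])
```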
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.output_size = output_size
self.embed = nn.Embedding(vocab_size,embedding_dim)
self.lst = nn.LSTM(embedding_dim,hidden_dim,n_layers,batch_first =True,
dropout=dropout)
# set class variables
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim,output_size)
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size =nn_input.shape[0]
# TODO: Implement function
x = self.embed(nn_input)
self.lst.flatten_parameters()
#print("embedded shape",x.shape)
out_ls,hidden = self.lst(x,hidden)
out_ls = out_ls.contiguous().view(-1, self.hidden_dim)
out_ls = self.fc(out_ls)
out_ls = out_ls.view(batch_size,-1,self.output_size)
out_ls =out_ls[:,-1]
# return one batch of output word scores and the hidden state
return out_ls, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
# initialize hidden state with zero weights, and move to GPU if available
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
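One detail worth calling out before the implementation: the hidden state carried over from the previous batch has to be detached from its old computation graph, otherwise backpropagation would try to reach back through every earlier batch. A minimal sketch of that pattern (illustrative names, assuming an LSTM-style `(h, c)` tuple like the one `init_hidden` returns):
```
def detach_hidden(hidden):
    # break the link to the previous batch's graph so gradients stop at the batch boundary
    return tuple(h.detach() for h in hidden)

# inside a training step this is used roughly as:
#   hidden = detach_hidden(hidden)
#   output, hidden = rnn(inp, hidden)
#   loss = criterion(output, target)
#   loss.backward()
#   optimizer.step()
```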
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
rnn.zero_grad()
if(train_on_gpu):
inp,target =inp.cuda(),target.cuda()
hidden = tuple([each.data for each in hidden])
output,hidden = rnn(inp,hidden)
#print(out.squeeze().shape)
#print(output)
loss =criterion(output.squeeze(), target)
#print(type(loss.item()))
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs,scheduler, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
scheduler.step()
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
#print(batch_i)
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length =25 # of words in a sequence
# Batch Size
batch_size = 256
print(len(int_text))
print("UNique",len(tuple(set(int_text))))
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size =len(tuple(set(int_text)))+50
# Output size
output_size =len(tuple(set(int_text)))+50
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 400
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from torch.optim import lr_scheduler
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
scheduler = lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.5)
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs,scheduler,show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn_no_rate_decay', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.380095920562744
Epoch: 1/10 Loss: 4.67059530377388
Epoch: 1/10 Loss: 4.44909031689167
Epoch: 1/10 Loss: 4.341865438818932
Epoch: 1/10 Loss: 4.2643080455064775
Epoch: 1/10 Loss: 4.2001852935552595
Epoch: 1/10 Loss: 4.139560710787773
Epoch: 1/10 Loss: 4.1191013193130495
Epoch: 2/10 Loss: 3.982674285682321
Epoch: 2/10 Loss: 3.903726255297661
Epoch: 2/10 Loss: 3.8717657566070556
Epoch: 2/10 Loss: 3.8514534956216813
Epoch: 2/10 Loss: 3.858292981982231
Epoch: 2/10 Loss: 3.8468697571754458
Epoch: 2/10 Loss: 3.8244822961091995
Epoch: 2/10 Loss: 3.8217316102981567
Epoch: 3/10 Loss: 3.7044957241816827
Epoch: 3/10 Loss: 3.633731118440628
Epoch: 3/10 Loss: 3.633208695650101
Epoch: 3/10 Loss: 3.6369596099853516
Epoch: 3/10 Loss: 3.6234382712841033
Epoch: 3/10 Loss: 3.6437795829772948
Epoch: 3/10 Loss: 3.6284162056446077
Epoch: 3/10 Loss: 3.6223182845115662
Epoch: 4/10 Loss: 3.5003589956383956
Epoch: 4/10 Loss: 3.4337933540344237
Epoch: 4/10 Loss: 3.4354637718200682
Epoch: 4/10 Loss: 3.4708599841594694
Epoch: 4/10 Loss: 3.4693118995428085
Epoch: 4/10 Loss: 3.462015705704689
Epoch: 4/10 Loss: 3.4584631085395814
Epoch: 4/10 Loss: 3.459835723042488
Epoch: 5/10 Loss: 3.338018147917519
Epoch: 5/10 Loss: 3.2398352110385895
Epoch: 5/10 Loss: 3.247293289899826
Epoch: 5/10 Loss: 3.2423013067245483
Epoch: 5/10 Loss: 3.248152292370796
Epoch: 5/10 Loss: 3.256008203625679
Epoch: 5/10 Loss: 3.2589075881242753
Epoch: 5/10 Loss: 3.2476092630624773
Epoch: 6/10 Loss: 3.1780121971292106
Epoch: 6/10 Loss: 3.142121150493622
Epoch: 6/10 Loss: 3.1441518980264664
Epoch: 6/10 Loss: 3.1487180894613265
Epoch: 6/10 Loss: 3.153564688563347
Epoch: 6/10 Loss: 3.1675422579050063
Epoch: 6/10 Loss: 3.166737269759178
Epoch: 6/10 Loss: 3.1866508322954177
Epoch: 7/10 Loss: 3.1127506281897337
Epoch: 7/10 Loss: 3.068658108711243
Epoch: 7/10 Loss: 3.0689715522527696
Epoch: 7/10 Loss: 3.075708549618721
Epoch: 7/10 Loss: 3.093345568776131
Epoch: 7/10 Loss: 3.092230723500252
Epoch: 7/10 Loss: 3.0976219362020494
Epoch: 7/10 Loss: 3.106599559187889
Epoch: 8/10 Loss: 3.029369626826013
Epoch: 8/10 Loss: 2.9889594382047653
Epoch: 8/10 Loss: 3.001965739130974
Epoch: 8/10 Loss: 3.007034165263176
Epoch: 8/10 Loss: 3.02835094332695
Epoch: 8/10 Loss: 3.0442234337329865
Epoch: 8/10 Loss: 3.053923916220665
Epoch: 8/10 Loss: 3.0556305766105654
Epoch: 9/10 Loss: 2.9651755927598966
Epoch: 9/10 Loss: 2.8981575787067415
Epoch: 9/10 Loss: 2.9150980657339094
Epoch: 9/10 Loss: 2.926441668868065
Epoch: 9/10 Loss: 2.914849742650986
Epoch: 9/10 Loss: 2.9352423095703126
Epoch: 9/10 Loss: 2.9422347676753997
Epoch: 9/10 Loss: 2.9390135484933855
Epoch: 10/10 Loss: 2.8942695148507056
Epoch: 10/10 Loss: 2.8809356051683426
Epoch: 10/10 Loss: 2.86645024895668
Epoch: 10/10 Loss: 2.878709568977356
Epoch: 10/10 Loss: 2.899056983590126
Epoch: 10/10 Loss: 2.896891574263573
Epoch: 10/10 Loss: 2.8919065654277802
Epoch: 10/10 Loss: 2.9161301946640013
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** A large sequence length slowed the computation considerably, so to keep training time manageable I chose a sequence length of 25. Following Andrej Karpathy's advice that 2-3 recurrent layers are usually enough, I used 2 layers. The hidden dimension needed to be large enough for the network to fit the data: with a hidden dimension of 128 the network was underfitting (high bias), so I increased it to 512. The learning rate was one of the most important parameters to tune; with a high learning rate the model did not converge, so I lowered it to 0.001. Training for only 10 epochs brought the training loss down to 2.916. Finally, with a batch size of 128 the loss was noisy (oscillating up and down), so I increased the batch size to 256 to smooth out the training loss curve (a small sketch of the learning-rate schedule used during training follows below). --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
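For reference, the `StepLR` scheduler used in the training cell above (step_size=4, gamma=0.5) halves the Adam learning rate every 4 epochs. A small standalone check of that schedule, using a throwaway parameter purely for illustration:
```
import torch
from torch.optim import lr_scheduler

# throwaway parameter just to build an optimizer whose schedule we can inspect
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=0.001)
scheduler = lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.5)

for epoch in range(1, 11):
    # (one epoch of training would go here)
    optimizer.step()
    scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])
# the learning rate is halved every 4 epochs: 0.001 -> 0.0005 -> 0.00025
```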
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
#trained_rnn = helper.load_model('./save/trained_rnn_no_rate_decay')
trained_rnn = torch.load('./trained_rnn.pt', map_location=lambda storage, loc: storage)
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
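To make the top-k sampling step concrete before reading the full `generate` function below, here is a toy sketch with made-up word scores (illustration only):
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up scores for a tiny vocabulary of 8 words
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 1.0, 0.05]])
p = F.softmax(scores, dim=1).data

top_k = 5
p, top_i = p.topk(top_k)                  # keep the 5 most likely word ids
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

# sample one of the top-k ids, weighted by their renormalized probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(word_i)
```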
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
print(predicted)
print("*******************")
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
#print(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
#print(gen_sentences)
#print("&&&&&&&&&&&&&&&&&&&&&&&&&&&")
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
#print(pad_word)
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
from collections import Counter
counts = Counter(text)
vocab = sorted(counts,key=counts.get,reverse=True)
vocab_to_int = {word:ii for ii, word in enumerate(vocab,1)}
int_to_vocab = {ii:word for ii, word in enumerate(vocab,1)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
vocab_to_int, int_to_vocab=create_lookup_tables(text)
vocab_to_int
int_to_vocab
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
symbols = ['.', ',', '"', ';', '!', '?', '(', ')', '-', '\n']
values = ["||Period||","||Comma||","||Quotation_Mark||","||Semicolon||","||Exclamation_Mark||","||Question_Mark||","||Left_Parentheses||","||Right_Parentheses||","||Dash||","||Return||"]
    # pair each symbol with its matching token so every punctuation mark gets a distinct value
    tokenized_punct = {sym: val for sym, val in zip(symbols, values)}
return tokenized_punct
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
token_lookup()
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
batches = len(words) // batch_size
# splice list of words using batchsizes
words = words[:batches * batch_size]
#loop through all the words and place them into separate features and targets
features = []
target = []
for i in range(len(words) - sequence_length):
features.append(words[i : i + sequence_length])
target.append(words[i + sequence_length])
#add data into dataset and dataloader
data_set = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(target)))
data_loader = torch.utils.data.DataLoader(data_set, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
#embedding and lstm,linear and sigmoid layers
self.embedding = nn.Embedding(vocab_size,embedding_dim)
self.lstm = nn.LSTM(embedding_dim,hidden_dim,n_layers,dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
embedding_out = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embedding_out, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
model_out = self.fc(lstm_out)
model_out = model_out.view(nn_input.size(0), -1, self.output_size)
model_out = model_out[:, -1]
# return one batch of output word scores and the hidden state
return model_out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
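In addition to the steps described above, the implementation below clips the gradient norm before the optimizer step, which helps keep exploding RNN gradients in check. In isolation the call looks roughly like this (the max-norm value of 5 matches the cell below):
```
import torch
import torch.nn as nn

# tiny throwaway model, only to demonstrate gradient clipping
model = nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).sum()
loss.backward()

# rescale all gradients in place so their combined norm is at most 5
total_norm = nn.utils.clip_grad_norm_(model.parameters(), 5)
print(total_norm)
```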
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp = inp.cuda()
target = target.cuda()
# perform backpropagation and optimization
h = tuple([el.data for el in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 100
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 5
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 230
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 5 epoch(s)...
Epoch: 1/5 Loss: 5.181810588359832
Epoch: 1/5 Loss: 4.761937874794007
Epoch: 1/5 Loss: 4.507635155200958
Epoch: 1/5 Loss: 4.3362547564506535
Epoch: 1/5 Loss: 4.214848518371582
Epoch: 1/5 Loss: 4.332943567276001
Epoch: 1/5 Loss: 4.280406264305115
Epoch: 1/5 Loss: 4.343973028182983
Epoch: 1/5 Loss: 4.233111576080322
Epoch: 1/5 Loss: 4.212460086345673
Epoch: 1/5 Loss: 4.0091156463623046
Epoch: 1/5 Loss: 4.155004780769348
Epoch: 1/5 Loss: 4.100644754886627
Epoch: 1/5 Loss: 4.179209367275238
Epoch: 1/5 Loss: 4.22897621011734
Epoch: 1/5 Loss: 4.221843630313873
Epoch: 1/5 Loss: 4.187346902370453
Epoch: 2/5 Loss: 4.016450043346571
Epoch: 2/5 Loss: 3.8639565262794493
Epoch: 2/5 Loss: 3.8212383608818055
Epoch: 2/5 Loss: 3.7240742402076723
Epoch: 2/5 Loss: 3.6664706873893738
Epoch: 2/5 Loss: 3.802676923751831
Epoch: 2/5 Loss: 3.7680643882751466
Epoch: 2/5 Loss: 3.871439549446106
Epoch: 2/5 Loss: 3.781108927726746
Epoch: 2/5 Loss: 3.771136235713959
Epoch: 2/5 Loss: 3.624932171344757
Epoch: 2/5 Loss: 3.7446896319389342
Epoch: 2/5 Loss: 3.7066690244674683
Epoch: 2/5 Loss: 3.7910688548088074
Epoch: 2/5 Loss: 3.832498989582062
Epoch: 2/5 Loss: 3.8242098665237427
Epoch: 2/5 Loss: 3.811536384105682
Epoch: 3/5 Loss: 3.726476957227873
Epoch: 3/5 Loss: 3.6502626142501833
Epoch: 3/5 Loss: 3.6168304510116576
Epoch: 3/5 Loss: 3.5480709390640257
Epoch: 3/5 Loss: 3.488213384628296
Epoch: 3/5 Loss: 3.5959919056892393
Epoch: 3/5 Loss: 3.594506411075592
Epoch: 3/5 Loss: 3.691841463088989
Epoch: 3/5 Loss: 3.5948741245269775
Epoch: 3/5 Loss: 3.5862636632919314
Epoch: 3/5 Loss: 3.4744590702056883
Epoch: 3/5 Loss: 3.574137797355652
Epoch: 3/5 Loss: 3.530224638462067
Epoch: 3/5 Loss: 3.61676025056839
Epoch: 3/5 Loss: 3.6702242856025697
Epoch: 3/5 Loss: 3.658023064136505
Epoch: 3/5 Loss: 3.6524762167930604
Epoch: 4/5 Loss: 3.5726160324138143
Epoch: 4/5 Loss: 3.527915768623352
Epoch: 4/5 Loss: 3.4881118440628054
Epoch: 4/5 Loss: 3.4276241149902344
Epoch: 4/5 Loss: 3.3685462641716004
Epoch: 4/5 Loss: 3.4655395698547364
Epoch: 4/5 Loss: 3.467364187717438
Epoch: 4/5 Loss: 3.5770102009773255
Epoch: 4/5 Loss: 3.4727519826889037
Epoch: 4/5 Loss: 3.4620379576683042
Epoch: 4/5 Loss: 3.361168047904968
Epoch: 4/5 Loss: 3.4462191457748412
Epoch: 4/5 Loss: 3.4130006065368654
Epoch: 4/5 Loss: 3.513098111629486
Epoch: 4/5 Loss: 3.551846893787384
Epoch: 4/5 Loss: 3.5437567119598388
Epoch: 4/5 Loss: 3.5235899271965025
Epoch: 5/5 Loss: 3.4799447914828425
Epoch: 5/5 Loss: 3.449487123012543
Epoch: 5/5 Loss: 3.4066185812950134
Epoch: 5/5 Loss: 3.340015625476837
Epoch: 5/5 Loss: 3.2913143305778503
Epoch: 5/5 Loss: 3.381657392024994
Epoch: 5/5 Loss: 3.3818252635002137
Epoch: 5/5 Loss: 3.492859775543213
Epoch: 5/5 Loss: 3.3845377788543702
Epoch: 5/5 Loss: 3.3782970786094664
Epoch: 5/5 Loss: 3.28645951461792
Epoch: 5/5 Loss: 3.364845615386963
Epoch: 5/5 Loss: 3.3303483180999756
Epoch: 5/5 Loss: 3.421983045101166
Epoch: 5/5 Loss: 3.4754480834007264
Epoch: 5/5 Loss: 3.4561420121192934
Epoch: 5/5 Loss: 3.4427926907539366
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? I started with a small sequence length and a small batch size, then increased the batch size so the model had more words to train on per batch. I also adjusted the learning rate to help improve accuracy. I chose hidden_dim and n_layers based on the RNN exercise and tweaked those values slightly to see what worked best; overall this combination gave the best results for the model. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
vocab_to_int = {word: i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens = {
'.': '||period||',
',': '||comma||',
'"': '||quotemark||',
';': '||semicolon||',
'!': '||exclammark||',
'?': '||questionmark||',
'(': '||leftparen||',
')': '||rightparen||',
'-': '||dash||',
'\n': '||return||'
}
return tokens
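# Illustrative sketch of how these tokens are meant to be applied (the actual
# replacement is presumably done inside helper.preprocess_and_save_data):
# text = 'okay, fine!'
# for symbol, token in token_lookup().items():
#     text = text.replace(symbol, ' {} '.format(token))
# text.split() -> ['okay', '||comma||', 'fine', '||exclammark||']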
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_targets = len(words) - sequence_length
feature, target = [], []
for idx in range(n_targets):
feature.append(words[idx: idx + sequence_length])
target.append(words[idx + sequence_length])
# print(feature[:10])
# print(target[:10])
# create tensor dataset
data = TensorDataset(torch.from_numpy(np.asarray(feature)),
torch.from_numpy(np.asarray(target)))
# create and return dataloader
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
test = [1, 2, 3, 4, 5, 6, 7]
batch_data(test, 4, 3)
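# To inspect a batch you could do something like this (illustrative; shuffle=False,
# so the first batch is deterministic here):
# loader = batch_data(test, 4, 3)
# sample_x, sample_y = next(iter(loader))
# sample_x -> tensor([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]])
# sample_y -> tensor([5, 6, 7])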
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first = True)
self.fc = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(dropout)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# add lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
# final_output = self.dropout(lstm_out)
final_output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
final_output = final_output.view(batch_size, -1, self.output_size)
# get last batch
final_output = final_output[:, -1]
# return one batch of output word scores and the hidden state
return final_output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
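# Shape walk-through for one forward pass (descriptive note, matching the code above):
# nn_input: (batch_size, seq_length) of word ids
# embeds: (batch_size, seq_length, embedding_dim)
# lstm_out: (batch_size, seq_length, hidden_dim), reshaped to (batch_size*seq_length, hidden_dim)
# fc output: (batch_size*seq_length, output_size), reshaped to (batch_size, seq_length, output_size)
# final_output[:, -1]: (batch_size, output_size), i.e. word scores for the last time step only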
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp = inp.cuda()
target = target.cuda()
# create new variables for the hidden state, otherwise we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
# clear grad
rnn.zero_grad()
# run rnn
output, hidden = rnn(inp, hidden)
# calculate loss, and run backpropagation
loss = criterion(output, target)
loss.backward()
# avoid the exploding gradient
clip = 5
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after every set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 150
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
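# note: output_size equals vocab_size because the network outputs a score for every word in the vocabulary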
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.529310809612274
Epoch: 1/10 Loss: 4.868192484378815
Epoch: 1/10 Loss: 4.583770613193512
Epoch: 1/10 Loss: 4.595194550991058
Epoch: 1/10 Loss: 4.5430461149215695
Epoch: 1/10 Loss: 4.4930935263633724
Epoch: 1/10 Loss: 4.348288031578064
Epoch: 1/10 Loss: 4.336965585231781
Epoch: 1/10 Loss: 4.30256972026825
Epoch: 1/10 Loss: 4.4140385413169865
Epoch: 1/10 Loss: 4.409925877094269
Epoch: 2/10 Loss: 4.19606430981202
Epoch: 2/10 Loss: 4.019548659801483
Epoch: 2/10 Loss: 3.8814735498428345
Epoch: 2/10 Loss: 3.9618682022094727
Epoch: 2/10 Loss: 3.971702713012695
Epoch: 2/10 Loss: 3.9697001032829284
Epoch: 2/10 Loss: 3.8743876752853392
Epoch: 2/10 Loss: 3.8787233324050905
Epoch: 2/10 Loss: 3.847103949069977
Epoch: 2/10 Loss: 3.9560626306533813
Epoch: 2/10 Loss: 3.9548972272872924
Epoch: 3/10 Loss: 3.858106634307185
Epoch: 3/10 Loss: 3.7653660554885864
Epoch: 3/10 Loss: 3.655870846271515
Epoch: 3/10 Loss: 3.722901219844818
Epoch: 3/10 Loss: 3.7476988835334777
Epoch: 3/10 Loss: 3.7488549032211305
Epoch: 3/10 Loss: 3.6617639536857607
Epoch: 3/10 Loss: 3.662154232978821
Epoch: 3/10 Loss: 3.635809054374695
Epoch: 3/10 Loss: 3.750231619358063
Epoch: 3/10 Loss: 3.735472125053406
Epoch: 4/10 Loss: 3.672470674847096
Epoch: 4/10 Loss: 3.599847785949707
Epoch: 4/10 Loss: 3.5190424242019653
Epoch: 4/10 Loss: 3.560512351036072
Epoch: 4/10 Loss: 3.591423318862915
Epoch: 4/10 Loss: 3.6137986092567442
Epoch: 4/10 Loss: 3.508136905670166
Epoch: 4/10 Loss: 3.522643671989441
Epoch: 4/10 Loss: 3.5122790293693544
Epoch: 4/10 Loss: 3.616040725708008
Epoch: 4/10 Loss: 3.590441781044006
Epoch: 5/10 Loss: 3.5479151077230227
Epoch: 5/10 Loss: 3.4855239949226378
Epoch: 5/10 Loss: 3.4174019122123718
Epoch: 5/10 Loss: 3.4489142622947693
Epoch: 5/10 Loss: 3.48048273563385
Epoch: 5/10 Loss: 3.516224744796753
Epoch: 5/10 Loss: 3.4033279757499697
Epoch: 5/10 Loss: 3.4352164211273193
Epoch: 5/10 Loss: 3.415852860927582
Epoch: 5/10 Loss: 3.5154579553604126
Epoch: 5/10 Loss: 3.4879583950042723
Epoch: 6/10 Loss: 3.466111128030634
Epoch: 6/10 Loss: 3.4112982540130616
Epoch: 6/10 Loss: 3.3276610789299013
Epoch: 6/10 Loss: 3.348341537952423
Epoch: 6/10 Loss: 3.4047589569091796
Epoch: 6/10 Loss: 3.434948842525482
Epoch: 6/10 Loss: 3.3213865656852724
Epoch: 6/10 Loss: 3.3389402017593386
Epoch: 6/10 Loss: 3.333437083244324
Epoch: 6/10 Loss: 3.4354339814186097
Epoch: 6/10 Loss: 3.397718542098999
Epoch: 7/10 Loss: 3.3891836554598784
Epoch: 7/10 Loss: 3.3408581585884094
Epoch: 7/10 Loss: 3.26220672082901
Epoch: 7/10 Loss: 3.2768125886917114
Epoch: 7/10 Loss: 3.3408994336128233
Epoch: 7/10 Loss: 3.3691354298591616
Epoch: 7/10 Loss: 3.2598521389961244
Epoch: 7/10 Loss: 3.2744753460884093
Epoch: 7/10 Loss: 3.270019622325897
Epoch: 7/10 Loss: 3.3687969222068785
Epoch: 7/10 Loss: 3.3324436416625978
Epoch: 8/10 Loss: 3.3327781237669707
Epoch: 8/10 Loss: 3.2885419187545777
Epoch: 8/10 Loss: 3.2035139918327333
Epoch: 8/10 Loss: 3.2243862705230715
Epoch: 8/10 Loss: 3.284676920890808
Epoch: 8/10 Loss: 3.3176104331016543
Epoch: 8/10 Loss: 3.208045747756958
Epoch: 8/10 Loss: 3.2179781918525694
Epoch: 8/10 Loss: 3.2211408581733703
Epoch: 8/10 Loss: 3.316591622829437
Epoch: 8/10 Loss: 3.275346101760864
Epoch: 9/10 Loss: 3.285159317719521
Epoch: 9/10 Loss: 3.2380096702575685
Epoch: 9/10 Loss: 3.1613839750289916
Epoch: 9/10 Loss: 3.1752956504821777
Epoch: 9/10 Loss: 3.235776946544647
Epoch: 9/10 Loss: 3.272580080032349
Epoch: 9/10 Loss: 3.1652264504432677
Epoch: 9/10 Loss: 3.1764199013710024
Epoch: 9/10 Loss: 3.1848195600509643
Epoch: 9/10 Loss: 3.2628794412612914
Epoch: 9/10 Loss: 3.241360436439514
Epoch: 10/10 Loss: 3.2420118245303065
Epoch: 10/10 Loss: 3.195278388500214
Epoch: 10/10 Loss: 3.1218719692230223
Epoch: 10/10 Loss: 3.136446934223175
Epoch: 10/10 Loss: 3.2008880047798156
Epoch: 10/10 Loss: 3.227733317375183
Epoch: 10/10 Loss: 3.120765627384186
Epoch: 10/10 Loss: 3.1420690217018126
Epoch: 10/10 Loss: 3.1429441833496092
Epoch: 10/10 Loss: 3.226587522983551
Epoch: 10/10 Loss: 3.1974760723114013
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** Yes, shorter sequences made the model converge faster, so I settled on a sequence length of 10 words. Based on the earlier lessons, an LSTM usually works well with 1-3 layers and a hidden_dim of 128, 256 or 512, so I chose n_layers = 2 and hidden_dim = 256. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
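A small numeric illustration of the top-k sampling step used in the cell below; the probabilities and word ids are made up:
```
import numpy as np
# softmax scores of the 5 most likely next words (hypothetical values)
p = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
top_i = np.array([17, 3, 42, 8, 99]) # hypothetical word ids for those scores
# renormalise over the top-k and sample, so the most likely word is usually,
# but not always, the one that gets picked
word_i = np.random.choice(top_i, p=p / p.sum())
```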
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'steven' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:44: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
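The warning above is generally harmless; it appears when the LSTM weights are no longer stored in one contiguous block of memory, for example after the model has been reloaded or moved between devices. A sketch of a common workaround, assuming the model exposes its LSTM layer as `.lstm` as the `RNN` class above does:
```
# compact the LSTM weights into a single contiguous chunk to silence the warning
trained_rnn.lstm.flatten_parameters()
```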
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
No GPU found. Please use a GPU to train your neural network.
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words) // batch_size
words = words[:(n_batches * batch_size)]
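# keep only enough words to fill complete batches; leftover words at the end are dropped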
features = []
targets = []
for i in range(len(words) - sequence_length):
features.append(words[i:(i + sequence_length)])
targets.append(words[i + sequence_length])
feature_tensor = torch.from_numpy(np.asarray(features))
target_tensor = torch.from_numpy(np.asarray(targets))
data = TensorDataset(feature_tensor, target_tensor)
data_loader = DataLoader(data, shuffle = True, batch_size = batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
_test_loader = batch_data(int_text, sequence_length = 10, batch_size = 10)
_feature, _target = next(iter(_test_loader))
for _batch in _feature:
print(' '.join([int_to_vocab[i] for i in _batch.numpy()]))
###Output
which part ||question_mark|| the renovating the restaurant you dont own
pit ||question_mark|| ||return|| ||return|| kramer: ||left_parentheses|| fake laugh ||right_parantheses|| look
elaine: but he's okay ||question_mark|| ||return|| ||return|| jerry: yeah but
leave the apartment ||period|| it's almost like she doesn't wanna
some other time ||period|| ||return|| ||return|| kramer: what tonight ||question_mark||
that is so ridiculous ||period|| ||return|| ||return|| jerry: come on
jerry: ||left_parentheses|| a little confused ||right_parantheses|| you wanna hang out
a new one i'll send you back this one ||period||
kramer: ||left_parentheses|| holds up some small white sachets ||right_parantheses|| i
do me a favor ||comma|| could ya tape the rest
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[28, 29, 30, 31, 32],
[30, 31, 32, 33, 34],
[ 4, 5, 6, 7, 8],
[34, 35, 36, 37, 38],
[32, 33, 34, 35, 36],
[14, 15, 16, 17, 18],
[29, 30, 31, 32, 33],
[25, 26, 27, 28, 29],
[12, 13, 14, 15, 16],
[37, 38, 39, 40, 41]])
torch.Size([10])
tensor([33, 35, 9, 39, 37, 19, 34, 30, 17, 42])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first = True)
#self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, vocab_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
#nn_input = nn_input.long()
embeds = self.embed(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
#out = self.dropout(lstm_out)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # we're only interested in the last
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(), weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(), weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# creating new variables for the hidden state, otherwise we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# perform backpropagation and optimization
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
#nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown after every set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 1
# Learning Rate
learning_rate = 1e-3
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
counts = Counter(text)
# Need to sort words from most to least frequent
vocab_sorted = sorted(counts, key=counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(vocab_sorted)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
tokens = dict()
tokens['.'] = '<period>'
tokens[','] = '<comma>'
tokens['"'] = '<quotation_mark>'
tokens[';'] = '<semicolon>'
tokens['?'] = '<question_mark>'
tokens['!'] = '<exclamation_mark>'
tokens['('] = '<left_paren>'
tokens[')'] = '<right_paren>'
tokens['-'] = '<dash>'
tokens['\n'] = '<new_line>'
return tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
#initialise
feature_tensor, target_tensor = [], []
#create data with TensorDataset
for i in range(len(words)):
target_idx = i + sequence_length
if target_idx < len(words):
features = words[i:i + sequence_length]
feature_tensor.append(features)
target = words[target_idx]
target_tensor.append(target)
data_set = TensorDataset(
torch.from_numpy(np.array(feature_tensor)),
torch.from_numpy(np.array(target_tensor))
)
#create dataloader
data_loader = DataLoader(data_set, batch_size=batch_size, shuffle=True)
# return a dataloader
return data_loader
### Test your dataloader below for printing and testing
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 19, 20, 21, 22, 23],
[ 13, 14, 15, 16, 17],
[ 42, 43, 44, 45, 46],
[ 3, 4, 5, 6, 7],
[ 18, 19, 20, 21, 22],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 37, 38, 39, 40, 41],
[ 16, 17, 18, 19, 20],
[ 15, 16, 17, 18, 19]])
torch.Size([10])
tensor([ 24, 18, 47, 8, 23, 11, 43, 42, 21, 20])
###Markdown
---

Build the Neural Network

Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.

The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.

**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.

Hints

1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:

```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
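To make those two reshaping steps concrete, here is a quick standalone check with dummy tensors (purely illustrative; the dimensions below are arbitrary):

```
import torch

batch_size, seq_length, hidden_dim, output_size = 4, 6, 8, 10
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)   # what an LSTM with batch_first=True returns

# hint 1: stack the time steps so one Linear layer can score them all at once
stacked = lstm_output.contiguous().view(-1, hidden_dim)         # (batch_size*seq_length, hidden_dim)
scores = torch.nn.Linear(hidden_dim, output_size)(stacked)      # (batch_size*seq_length, output_size)

# hint 2: reshape back and keep only the last time step of each sequence
scores = scores.view(batch_size, -1, output_size)
last_scores = scores[:, -1]
print(last_scores.shape)                                        # torch.Size([4, 10])
```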
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# define embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_dim)
## Define the LSTM
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define the final, fully-connected output layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# move model to GPU, if available
if(train_on_gpu):
rnn.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
# get predicted outputs
output, h = rnn(inputs, h)
# calculate loss
loss = criterion(output, target)
# optimizer.zero_grad()
loss.backward()
# 'clip_grad_norm' helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network Training

With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.

Train Loop

The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
Hyperparameters

Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.

If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.211868348836899
Epoch: 1/10 Loss: 4.622847301483154
Epoch: 1/10 Loss: 4.430748433828354
Epoch: 1/10 Loss: 4.326240783452987
Epoch: 1/10 Loss: 4.258989337205887
Epoch: 1/10 Loss: 4.204880309343338
Epoch: 2/10 Loss: 4.0973720341058115
Epoch: 2/10 Loss: 3.9711267096996306
Epoch: 2/10 Loss: 3.957555563688278
Epoch: 2/10 Loss: 3.943263653755188
Epoch: 2/10 Loss: 3.9399359176158906
Epoch: 2/10 Loss: 3.922368621110916
Epoch: 3/10 Loss: 3.8426766694651575
Epoch: 3/10 Loss: 3.7671526267528535
Epoch: 3/10 Loss: 3.7748427736759185
Epoch: 3/10 Loss: 3.7617118771076203
Epoch: 3/10 Loss: 3.774270247936249
Epoch: 3/10 Loss: 3.7749944460391998
Epoch: 4/10 Loss: 3.702938645669444
Epoch: 4/10 Loss: 3.6361771178245545
Epoch: 4/10 Loss: 3.6471969335079195
Epoch: 4/10 Loss: 3.6455666177272796
Epoch: 4/10 Loss: 3.6701570279598235
Epoch: 4/10 Loss: 3.676929558515549
Epoch: 5/10 Loss: 3.6110563346006472
Epoch: 5/10 Loss: 3.5416511824131014
Epoch: 5/10 Loss: 3.576618997335434
Epoch: 5/10 Loss: 3.5472562327384947
Epoch: 5/10 Loss: 3.600982535123825
Epoch: 5/10 Loss: 3.593803001642227
Epoch: 6/10 Loss: 3.529087289569946
Epoch: 6/10 Loss: 3.466048729658127
Epoch: 6/10 Loss: 3.484813708782196
Epoch: 6/10 Loss: 3.495199167013168
Epoch: 6/10 Loss: 3.5389509155750276
Epoch: 6/10 Loss: 3.528209956884384
Epoch: 7/10 Loss: 3.474839057941737
Epoch: 7/10 Loss: 3.425573988676071
Epoch: 7/10 Loss: 3.4244571204185488
Epoch: 7/10 Loss: 3.439885439157486
Epoch: 7/10 Loss: 3.47168997836113
Epoch: 7/10 Loss: 3.476027591943741
Epoch: 8/10 Loss: 3.4200368578994618
Epoch: 8/10 Loss: 3.368068772792816
Epoch: 8/10 Loss: 3.3651770911216734
Epoch: 8/10 Loss: 3.407411741733551
Epoch: 8/10 Loss: 3.4263857192993163
Epoch: 8/10 Loss: 3.4460477793216704
Epoch: 9/10 Loss: 3.3887716292608445
Epoch: 9/10 Loss: 3.3302845506668093
Epoch: 9/10 Loss: 3.3480998182296755
Epoch: 9/10 Loss: 3.360688591003418
Epoch: 9/10 Loss: 3.384992310523987
Epoch: 9/10 Loss: 3.4016382229328155
Epoch: 10/10 Loss: 3.3434098266961914
Epoch: 10/10 Loss: 3.3047204802036285
Epoch: 10/10 Loss: 3.3225907707214355
Epoch: 10/10 Loss: 3.332591115951538
Epoch: 10/10 Loss: 3.3545617566108703
Epoch: 10/10 Loss: 3.3662430727481842
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:** Going over the course material and examples, I first tried smaller batch sizes [32, 64] and sequence lengths [5, 10], together with a range of learning rates [1, 0.1, 0.01, 0.001]. Following the common approach in the course material, I tried embedding/hidden dimensions of [200, 250, 300]. The first few trials with small batch sizes performed poorly: the learning rate and number of epochs made little difference, and the loss stayed large and decreased only slowly. After increasing these parameters, I settled on sequence_length = 10 and batch_size = 128. With these fixed, I tried different learning rates and noticed that lower learning rates and a bigger hidden_dim yielded faster convergence. Having also noticed that the loss decrease plateaus, I opted for 10 epochs. The final model (matching the code above) used:

- sequence_length = 10
- batch_size = 128
- learning_rate = 0.001
- embedding_dim = 200
- hidden_dim = 250
- n_layers = 2

Training for 10 epochs reached a loss of about 3.30 during the final epoch. A sketch of how such a manual sweep could be organised follows the Checkpoint note below.

---

Checkpoint

After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
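As referenced above, one way such a manual sweep could be organised is sketched below (illustrative only; it relies on the `batch_data`, `RNN` and `train_rnn` definitions from this notebook, every configuration retrains from scratch, and `train_rnn` reads the global `train_loader`):

```
# illustrative manual sweep over a few candidate settings -- slow on the full dataset
candidate_settings = [
    {'sequence_length': 5,  'hidden_dim': 200},
    {'sequence_length': 10, 'hidden_dim': 250},
]

for cfg in candidate_settings:
    # train_rnn iterates over the global train_loader, so rebuild it for each sequence length
    train_loader = batch_data(int_text, cfg['sequence_length'], batch_size=128)
    model = RNN(vocab_size, output_size, embedding_dim=200,
                hidden_dim=cfg['hidden_dim'], n_layers=2, dropout=0.5)
    if train_on_gpu:
        model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    criterion = nn.CrossEntropyLoss()
    print('Trying', cfg)
    train_rnn(model, 128, optimizer, criterion, n_epochs=1, show_every_n_batches=1000)
```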
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
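To see the top-k sampling step in isolation, here is a minimal sketch with made-up word scores (illustrative only; the real scores come from the trained RNN):

```
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.2, 1.0]])    # fake scores for a vocabulary of 6 words
p = F.softmax(scores, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)                                    # keep only the 3 most likely words
top_i = top_i.numpy().squeeze()
p = p.numpy().squeeze()

word_i = np.random.choice(top_i, p=p / p.sum())             # sample among them, weighted by probability
print(word_i)                                               # usually 1, sometimes 3 or 5
```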
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:41: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
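For example, once `create_lookup_tables` below is implemented, the two dictionaries should invert each other on a toy word list (an illustrative sketch; the exact ids may differ):

```
sample_words = ['hello', 'world', 'hello', 'again']
v2i, i2v = create_lookup_tables(sample_words)

ids = [v2i[w] for w in sample_words]
print(ids)                       # e.g. [1, 2, 1, 3]
print([i2v[i] for i in ids])     # ['hello', 'world', 'hello', 'again']
```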
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
## Build a dictionary that maps words to integers
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 1)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
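As described above, the tokens only help once a space is added around each symbol so that punctuation becomes its own "word". A minimal illustration of that idea (not the actual helper code, and assuming the `token_lookup` implemented in the next cell):

```
sample = 'hello, world! bye.'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['hello', '||comma||', 'world', '||exclamation_mark||', 'bye', '||period||']
```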
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.': '||period||',
',': '||comma||',
'\"': '||quotation||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'-': '||dash||',
'\n': '||return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save it

Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
#truncate the last few extra words, only save full batches
words = words[:n_batches * batch_size]
# print(len(words))
# print(sequence_length)
# print(len(words) - sequence_length)
x, y = [], []
for i in range(0, len(words) - sequence_length):
i_end = i + sequence_length
x_batch = words[i:i_end]
y_batch = words[i_end]
x.append(x_batch)
y.append(y_batch)
# create a dataset and dataloader
dataset = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
data_loader = DataLoader(dataset, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
# batch_data(int_text[:31], 4, 5)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[20, 21, 22, 23, 24],
[13, 14, 15, 16, 17],
[22, 23, 24, 25, 26],
[21, 22, 23, 24, 25],
[10, 11, 12, 13, 14],
[38, 39, 40, 41, 42],
[19, 20, 21, 22, 23],
[28, 29, 30, 31, 32],
[23, 24, 25, 26, 27],
[41, 42, 43, 44, 45]], dtype=torch.int32)
torch.Size([10])
tensor([25, 18, 27, 26, 15, 43, 24, 33, 28, 46], dtype=torch.int32)
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# finaly fully connected linear layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input.long())
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
out = self.fc(lstm_out)
# reshape to be batch_size first
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, h = rnn(inp, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
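# Optional (illustrative): the first forward_back_prop implementation earlier in this
# document clips gradients at this point to guard against exploding gradients in LSTMs:
# nn.utils.clip_grad_norm_(rnn.parameters(), 5)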
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network Training

With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.

Train Loop

The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 16 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 5
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 350
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
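One rough way to interpret that target (an illustrative aside): the training criterion is cross-entropy over the vocabulary, so exponentiating the loss gives the model's perplexity, i.e. roughly how many words it is still "choosing between" on average.

```
import numpy as np
print(np.exp(3.5))   # about 33 -- far fewer than the thousands of words in the vocabulary
```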
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 5 epoch(s)...
Epoch: 1/5 Loss: 5.449370867729187
Epoch: 1/5 Loss: 4.807767615795136
Epoch: 1/5 Loss: 4.611214173793793
Epoch: 1/5 Loss: 4.476137000560761
Epoch: 1/5 Loss: 4.422181371212005
Epoch: 1/5 Loss: 4.323347067832946
Epoch: 1/5 Loss: 4.277462675571441
Epoch: 1/5 Loss: 4.244762655258179
Epoch: 1/5 Loss: 4.221954915046692
Epoch: 1/5 Loss: 4.169711175918579
Epoch: 1/5 Loss: 4.154976448059082
Epoch: 1/5 Loss: 4.10861354637146
Epoch: 1/5 Loss: 4.117494498252869
Epoch: 2/5 Loss: 4.0157997418533675
Epoch: 2/5 Loss: 3.9197484226226806
Epoch: 2/5 Loss: 3.9177947392463683
Epoch: 2/5 Loss: 3.8988905987739564
Epoch: 2/5 Loss: 3.8829252243041994
Epoch: 2/5 Loss: 3.866315701007843
Epoch: 2/5 Loss: 3.8654651746749877
Epoch: 2/5 Loss: 3.871019511222839
Epoch: 2/5 Loss: 3.8825811767578124
Epoch: 2/5 Loss: 3.8640567889213564
Epoch: 2/5 Loss: 3.841289692878723
Epoch: 2/5 Loss: 3.8531268496513364
Epoch: 2/5 Loss: 3.848750068664551
Epoch: 3/5 Loss: 3.750009061383807
Epoch: 3/5 Loss: 3.6664652819633483
Epoch: 3/5 Loss: 3.6597061467170717
Epoch: 3/5 Loss: 3.681975291252136
Epoch: 3/5 Loss: 3.6669215922355654
Epoch: 3/5 Loss: 3.677124358654022
Epoch: 3/5 Loss: 3.6883189163208008
Epoch: 3/5 Loss: 3.6848109169006347
Epoch: 3/5 Loss: 3.6760306401252745
Epoch: 3/5 Loss: 3.7014954924583434
Epoch: 3/5 Loss: 3.6787463788986208
Epoch: 3/5 Loss: 3.70131125497818
Epoch: 3/5 Loss: 3.712094340324402
Epoch: 4/5 Loss: 3.613202652409057
Epoch: 4/5 Loss: 3.5351931643486023
Epoch: 4/5 Loss: 3.520432821750641
Epoch: 4/5 Loss: 3.5300108675956725
Epoch: 4/5 Loss: 3.507746780872345
Epoch: 4/5 Loss: 3.5395720262527464
Epoch: 4/5 Loss: 3.545678609371185
Epoch: 4/5 Loss: 3.5577009558677672
Epoch: 4/5 Loss: 3.545559913635254
Epoch: 4/5 Loss: 3.5700124411582945
Epoch: 4/5 Loss: 3.585894054889679
Epoch: 4/5 Loss: 3.578835521697998
Epoch: 4/5 Loss: 3.574189555644989
Epoch: 5/5 Loss: 3.5035635836360868
Epoch: 5/5 Loss: 3.4062910966873168
Epoch: 5/5 Loss: 3.4056600265502928
Epoch: 5/5 Loss: 3.416020705699921
Epoch: 5/5 Loss: 3.406880359649658
Epoch: 5/5 Loss: 3.422125506401062
Epoch: 5/5 Loss: 3.441306739807129
Epoch: 5/5 Loss: 3.4627443494796752
Epoch: 5/5 Loss: 3.452195415973663
Epoch: 5/5 Loss: 3.4728175683021547
Epoch: 5/5 Loss: 3.471756200313568
Epoch: 5/5 Loss: 3.4886891112327576
Epoch: 5/5 Loss: 3.4871754336357115
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:** Luckily I didn't have to explore many different hyperparameter values. I chose sequence_length = 16 somewhat arbitrarily, but wanted it to be a power of two. For the batch size I picked one of the most commonly used values, 128 (other common choices being 8, 64 and 256). For num_epochs I chose a relatively small number, 5, so I could iterate over the dataset quickly and explore different hyperparameters if needed (luckily I didn't have to). I used a common learning_rate of 0.001. For embedding_dim I chose 200, because the nanodegree suggests values between 200 and 500, and the smallest recommended value trains fastest. For hidden_dim I wanted something a bit bigger than embedding_dim, so I first tried 300, which gave a loss of 3.52; increasing the number of epochs to 6 or 10 would probably have pushed it below 3.5, but instead I varied the hidden dimension to see whether I could reach the target within 5 epochs. A hidden_dim of 250 gave a loss of 3.56, and 350 gave a loss of 3.48, below the desired value of 3.5.

Final Loss: 3.4871754336357115

---

Checkpoint

After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
elaine:
elaine: oh, yeah.
elaine: what?
newman: oh, you can't tell him that you didn't have to get a big deal...
kramer: no, no. no, no. no. no, no. no, no, no. no. no, no. no. no, you can't get the camera.
george: you know, i'm not gonna be able to make the keys.
elaine: what?
elaine: i told you, i can't go.
george: oh, you know, i think i should see him.
elaine: oh, well, you gotta get a little problem, and you can do that.
jerry: i don't know, but you don't even know. you know, you know, you should be able to take that checked.
george: well, it's the first thing, huh?
jerry: yeah.
elaine: oh, i was just curious. i can't tell you that. i was just wondering if you could do it.
jerry: what do you think, the worst part.
elaine: oh, well, i was in the mood for the new york. i was just wondering, but i don't even know how to go.
george: oh, no, i'm not really sure.
elaine: oh! oh, i can't believe i got to talk about.(he is laughing.)
elaine: i told her, i don't think i'm gonna go to the bathroom.
george: well, i think i could. i was wondering if i can tell him.
george: you know, i know, i was in the mood.
kramer:(looking at the door, then he gets) you know, this is all i do, but i don't want to get the hell out of my mind.
jerry: well, i don't know. i mean, you can't tell me.
elaine:(to george) what is that
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
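# (Illustrative sketch) generate and save one script per main character, reusing the
# generate() function, trained_rnn and the dictionaries defined above in this notebook.
for name in ['jerry', 'elaine', 'george', 'kramer']:
    script = generate(trained_rnn, vocab_to_int[name + ':'], int_to_vocab, token_dict,
                      vocab_to_int[helper.SPECIAL_WORDS['PADDING']], 400)
    with open('generated_script_{}.txt'.format(name), 'w') as out_file:
        out_file.write(script)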
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
import numpy as np
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_set = set(text)
vocab_to_int = {word: i for i, word in enumerate(word_set)}
int_to_vocab = {i: word for i, word in enumerate(vocab_to_int)}
# return tuple
return (vocab_to_int, int_to_vocab)
#create_lookup_tables(1)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
values = ['||Period||','||Comma||','||Quotation_Mark||','||Semicolon||','||Exclamation_mark||','||Question_mark||','||Left_Parentheses||','||Right_Parentheses||','||Return||','||Dash||']
keys = ['.', ',', '"', ';', '!', '?', '(', ')','\n','-']
return (dict(zip(keys,values)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save it

Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
#n_batches = int(len(words) / (batch_size * sequence_length))
n_targets = len(words) - sequence_length
feature ,target = [],[]
for i in range(n_targets):
x = words[i : i+sequence_length] # get some words from the given list
feature.append(x)
y = words[i+sequence_length] # get the next word to be the target
target.append(y)
feature_tensors=torch.from_numpy(np.array(feature))
target_tensors=torch.from_numpy(np.array(target))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,batch_size)
#return np.array(list(zip(feature_tensors, target_tensors)))
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_data([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
print(test_text)
#test_text = [ 28, 29, 30, 31, 32]
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
range(0, 50)
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# define model layers
# fully-connected layer that maps the LSTM output to word scores over the vocabulary
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
x = nn_input.long()
embeds = self.embedding(x)
r_output, hidden = self.lstm(embeds, hidden)## Get the outputs and the new hidden state from the lstm
# stack up the LSTM outputs so they can be passed to the fully-connected layer
out = r_output.contiguous().view(-1, self.hidden_dim)
# run the stacked outputs through the fully-connected layer to get word scores
out = self.fc(out)
# reshape into (batch_size, seq_length, output_size) and keep only the last time step
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get the word scores for the last word of each sequence
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, h = rnn(inp, h)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of uniqe tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length =10 # of words in a sequence
# Batch Size
batch_size = 200
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 150
# Hidden Dimension
hidden_dim = 300
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.808836177825928
Epoch: 1/10 Loss: 5.770952031612397
Epoch: 1/10 Loss: 5.29177312040329
Epoch: 1/10 Loss: 4.671103951215744
Epoch: 2/10 Loss: 4.382948416553131
Epoch: 2/10 Loss: 4.240608947277069
Epoch: 2/10 Loss: 4.103333241701126
Epoch: 2/10 Loss: 4.080638859272003
Epoch: 3/10 Loss: 3.986224267580738
Epoch: 3/10 Loss: 3.9209645738601684
Epoch: 3/10 Loss: 3.8340556132793426
Epoch: 3/10 Loss: 3.8579432373046876
Epoch: 4/10 Loss: 3.7793105117262225
Epoch: 4/10 Loss: 3.7384330024719237
Epoch: 4/10 Loss: 3.6481493561267855
Epoch: 4/10 Loss: 3.698090669631958
Epoch: 5/10 Loss: 3.6343697765102125
Epoch: 5/10 Loss: 3.5940310513973235
Epoch: 5/10 Loss: 3.5124545404911043
Epoch: 5/10 Loss: 3.5755082693099975
Epoch: 6/10 Loss: 3.527618736763523
Epoch: 6/10 Loss: 3.492864129304886
Epoch: 6/10 Loss: 3.418653825044632
Epoch: 6/10 Loss: 3.4798633608818053
Epoch: 7/10 Loss: 3.4379712072137285
Epoch: 7/10 Loss: 3.4046829326152803
Epoch: 7/10 Loss: 3.337872090816498
Epoch: 7/10 Loss: 3.4017954483032224
Epoch: 8/10 Loss: 3.3671753450615767
Epoch: 8/10 Loss: 3.3372667450904845
Epoch: 8/10 Loss: 3.2712749412059785
Epoch: 8/10 Loss: 3.3423393864631654
Epoch: 9/10 Loss: 3.305119448491972
Epoch: 9/10 Loss: 3.2808288826942444
Epoch: 9/10 Loss: 3.2138831961154937
Epoch: 9/10 Loss: 3.2827796609401703
Epoch: 10/10 Loss: 3.253008118394303
Epoch: 10/10 Loss: 3.22671203827858
Epoch: 10/10 Loss: 3.171477886199951
Epoch: 10/10 Loss: 3.2335320267677305
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** vocab_size and output_size are both set to len(vocab_to_int), since the network predicts a score for every word in the vocabulary. hidden_dim: I chose roughly double the embedding dimension, enough capacity for better accuracy but not so large that the model overfits (there is no single best value here). n_layers: I used 3 layers, expecting a three-layer network to outperform a two-layer one. sequence_length: I reduced it to speed up training. batch_size: I reduced it to bring the loss down and improve accuracy (again, no single best value). learning_rate: I reduced it for better accuracy, since it should not be too large. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
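As a warm-up for the `generate` function below, here is a tiny, self-contained sketch of the top-k sampling step it relies on; the scores in `output` are made-up numbers, not model output.
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up word scores for one sequence over a 6-word vocabulary
output = torch.tensor([[1.2, 0.3, 2.5, 0.1, 1.9, 0.05]])
p = F.softmax(output, dim=1).data      # turn raw scores into probabilities
p, top_i = p.topk(3)                   # keep the 3 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
# sample among the top 3, weighted by their re-normalized probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(top_i, word_i)
```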
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:48: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
from torch import LongTensor
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors, target_tensors = [], []
for i in range(len(words)):
if i + sequence_length >= len(words):
break
feature_tensors.append(words[i:i+sequence_length])
target_tensors.append(words[i+sequence_length])
feature_tensors = LongTensor(feature_tensors)
target_tensors = LongTensor(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
words = list(range(10))
iter(batch_data(words, 4, 3)).next()
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# set class variables
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip = 5
# move data to GPU, if available
if train_on_gpu:
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, h)
# perform backpropagation and optimization
loss = criterion(output, target)#.long())
loss.backward()#retain_graph=True)
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 6 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 30
# Learning Rate
learning_rate = 1e-3
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 1000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 30 epoch(s)...
Epoch: 1/30 Loss: 5.084580048322677
Epoch: 1/30 Loss: 4.557066640138626
Epoch: 1/30 Loss: 4.541537858009338
Epoch: 1/30 Loss: 4.419931567907334
Epoch: 1/30 Loss: 4.313605940580368
Epoch: 1/30 Loss: 4.423010891675949
Epoch: 2/30 Loss: 4.249989838815692
Epoch: 2/30 Loss: 3.9766130843162535
Epoch: 2/30 Loss: 4.07036221408844
Epoch: 2/30 Loss: 4.009245562314987
Epoch: 2/30 Loss: 3.943030775308609
Epoch: 2/30 Loss: 4.094046101808548
Epoch: 3/30 Loss: 3.998621705221367
Epoch: 3/30 Loss: 3.8033325910568236
Epoch: 3/30 Loss: 3.899204357147217
Epoch: 3/30 Loss: 3.845042642354965
Epoch: 3/30 Loss: 3.7852564511299134
Epoch: 3/30 Loss: 3.93153954911232
Epoch: 4/30 Loss: 3.8491245930179114
Epoch: 4/30 Loss: 3.6902875807285307
Epoch: 4/30 Loss: 3.7767852036952974
Epoch: 4/30 Loss: 3.7181156973838805
Epoch: 4/30 Loss: 3.6768465995788575
Epoch: 4/30 Loss: 3.8193777720928193
Epoch: 5/30 Loss: 3.7576469090337254
Epoch: 5/30 Loss: 3.6006077923774717
Epoch: 5/30 Loss: 3.678778804540634
Epoch: 5/30 Loss: 3.6325654046535494
Epoch: 5/30 Loss: 3.6017089734077454
Epoch: 5/30 Loss: 3.732928256034851
Epoch: 6/30 Loss: 3.6839546125633817
Epoch: 6/30 Loss: 3.523293900489807
Epoch: 6/30 Loss: 3.602577045440674
Epoch: 6/30 Loss: 3.5691404159069062
Epoch: 6/30 Loss: 3.536283244609833
Epoch: 6/30 Loss: 3.662341813802719
Epoch: 7/30 Loss: 3.615004067496033
Epoch: 7/30 Loss: 3.4620967202186583
Epoch: 7/30 Loss: 3.5381310691833496
Epoch: 7/30 Loss: 3.506252078294754
Epoch: 7/30 Loss: 3.479160343170166
Epoch: 7/30 Loss: 3.60372793841362
Epoch: 8/30 Loss: 3.5601333138178664
Epoch: 8/30 Loss: 3.4220535094738005
Epoch: 8/30 Loss: 3.4877538206577303
Epoch: 8/30 Loss: 3.46400217294693
Epoch: 8/30 Loss: 3.4290434563159944
Epoch: 8/30 Loss: 3.550292696714401
Epoch: 9/30 Loss: 3.516703648031999
Epoch: 9/30 Loss: 3.3777057156562806
Epoch: 9/30 Loss: 3.442356826305389
Epoch: 9/30 Loss: 3.426946301460266
Epoch: 9/30 Loss: 3.3884910480976105
Epoch: 9/30 Loss: 3.507748664855957
Epoch: 10/30 Loss: 3.478255246604621
Epoch: 10/30 Loss: 3.347915611743927
Epoch: 10/30 Loss: 3.404020783185959
Epoch: 10/30 Loss: 3.3921905906200407
Epoch: 10/30 Loss: 3.3548373084068297
Epoch: 10/30 Loss: 3.4692150990962984
Epoch: 11/30 Loss: 3.4439151168414455
Epoch: 11/30 Loss: 3.316784155368805
Epoch: 11/30 Loss: 3.374495920419693
Epoch: 11/30 Loss: 3.3694745795726777
Epoch: 11/30 Loss: 3.3234372375011443
Epoch: 11/30 Loss: 3.4330719435214996
Epoch: 12/30 Loss: 3.416563112642512
Epoch: 12/30 Loss: 3.293692454338074
Epoch: 12/30 Loss: 3.350555213212967
Epoch: 12/30 Loss: 3.346169707775116
Epoch: 12/30 Loss: 3.296534299135208
Epoch: 12/30 Loss: 3.4009602777957917
Epoch: 13/30 Loss: 3.3865375934124233
Epoch: 13/30 Loss: 3.268074172735214
Epoch: 13/30 Loss: 3.3141181790828704
Epoch: 13/30 Loss: 3.3210704760551453
Epoch: 13/30 Loss: 3.2698616359233856
Epoch: 13/30 Loss: 3.373006115436554
Epoch: 14/30 Loss: 3.3679953238999194
Epoch: 14/30 Loss: 3.244525763273239
Epoch: 14/30 Loss: 3.29574050116539
Epoch: 14/30 Loss: 3.3040728302001954
Epoch: 14/30 Loss: 3.2484118444919585
Epoch: 14/30 Loss: 3.3575907225608828
Epoch: 15/30 Loss: 3.3448792450799743
Epoch: 15/30 Loss: 3.223085729837418
Epoch: 15/30 Loss: 3.2762039811611174
Epoch: 15/30 Loss: 3.281678933620453
Epoch: 15/30 Loss: 3.22863160610199
Epoch: 15/30 Loss: 3.332759441137314
Epoch: 16/30 Loss: 3.3235383756636603
Epoch: 16/30 Loss: 3.2087944324016573
Epoch: 16/30 Loss: 3.2585117728710173
Epoch: 16/30 Loss: 3.2640167453289033
Epoch: 16/30 Loss: 3.212729797363281
Epoch: 16/30 Loss: 3.312464111804962
Epoch: 17/30 Loss: 3.3047417181285277
Epoch: 17/30 Loss: 3.196009305715561
Epoch: 17/30 Loss: 3.2426465272903444
Epoch: 17/30 Loss: 3.2462061040401458
Epoch: 17/30 Loss: 3.1951277561187745
Epoch: 17/30 Loss: 3.2954153513908384
Epoch: 18/30 Loss: 3.2933347820932943
Epoch: 18/30 Loss: 3.180704066514969
Epoch: 18/30 Loss: 3.222182501077652
Epoch: 18/30 Loss: 3.23581387090683
Epoch: 18/30 Loss: 3.1807278969287873
Epoch: 18/30 Loss: 3.2773541338443755
Epoch: 19/30 Loss: 3.275315307129937
Epoch: 19/30 Loss: 3.168750624895096
Epoch: 19/30 Loss: 3.208614781618118
Epoch: 19/30 Loss: 3.2200562987327577
Epoch: 19/30 Loss: 3.1651264278888704
Epoch: 19/30 Loss: 3.264369606733322
Epoch: 20/30 Loss: 3.2636970837991695
Epoch: 20/30 Loss: 3.1530574271678926
Epoch: 20/30 Loss: 3.2056119816303252
Epoch: 20/30 Loss: 3.214386161804199
Epoch: 20/30 Loss: 3.155025992393494
Epoch: 20/30 Loss: 3.2494461867809297
Epoch: 21/30 Loss: 3.247305773550511
Epoch: 21/30 Loss: 3.1462849230766294
Epoch: 21/30 Loss: 3.185898665189743
Epoch: 21/30 Loss: 3.2018489487171173
Epoch: 21/30 Loss: 3.1438140380382538
Epoch: 21/30 Loss: 3.236833574771881
Epoch: 22/30 Loss: 3.239420629625335
Epoch: 22/30 Loss: 3.1318942947387693
Epoch: 22/30 Loss: 3.1809025826454165
Epoch: 22/30 Loss: 3.1882017199993133
Epoch: 22/30 Loss: 3.131958144664764
Epoch: 22/30 Loss: 3.231216861486435
Epoch: 23/30 Loss: 3.2238120909812427
Epoch: 23/30 Loss: 3.1226953790187837
Epoch: 23/30 Loss: 3.1698591079711913
Epoch: 23/30 Loss: 3.1793776986598967
Epoch: 23/30 Loss: 3.119948390007019
Epoch: 23/30 Loss: 3.215026288986206
Epoch: 24/30 Loss: 3.21672063033674
Epoch: 24/30 Loss: 3.1092625601291655
Epoch: 24/30 Loss: 3.1556823587417604
Epoch: 24/30 Loss: 3.168131458520889
Epoch: 24/30 Loss: 3.10874280333519
Epoch: 24/30 Loss: 3.2112116651535034
Epoch: 25/30 Loss: 3.205730247763948
Epoch: 25/30 Loss: 3.1062616651058197
Epoch: 25/30 Loss: 3.1500544204711916
Epoch: 25/30 Loss: 3.1599951698780058
Epoch: 25/30 Loss: 3.099935098171234
Epoch: 25/30 Loss: 3.199685146570206
Epoch: 26/30 Loss: 3.1917980676256743
Epoch: 26/30 Loss: 3.096990537643433
Epoch: 26/30 Loss: 3.1379871826171875
Epoch: 26/30 Loss: 3.148208943128586
Epoch: 26/30 Loss: 3.0952279560565947
Epoch: 26/30 Loss: 3.1915375134944917
Epoch: 27/30 Loss: 3.1839449498906482
Epoch: 27/30 Loss: 3.087524526357651
Epoch: 27/30 Loss: 3.1337972748279572
Epoch: 27/30 Loss: 3.142251579761505
Epoch: 27/30 Loss: 3.0873891971111296
Epoch: 27/30 Loss: 3.1779980220794677
Epoch: 28/30 Loss: 3.174762592272083
Epoch: 28/30 Loss: 3.08094087767601
Epoch: 28/30 Loss: 3.1201308219432833
Epoch: 28/30 Loss: 3.1329622395038603
Epoch: 28/30 Loss: 3.0763747534751893
Epoch: 28/30 Loss: 3.17627747964859
Epoch: 29/30 Loss: 3.1672786248402285
Epoch: 29/30 Loss: 3.070994129657745
Epoch: 29/30 Loss: 3.1141885929107667
Epoch: 29/30 Loss: 3.1182726130485534
Epoch: 29/30 Loss: 3.070542600631714
Epoch: 29/30 Loss: 3.1658234694004057
Epoch: 30/30 Loss: 3.163955475001725
Epoch: 30/30 Loss: 3.0649259111881255
Epoch: 30/30 Loss: 3.113813718557358
Epoch: 30/30 Loss: 3.1122830049991608
Epoch: 30/30 Loss: 3.0582309074401857
Epoch: 30/30 Loss: 3.1654540483951568
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** The initial choice of hyperparameters is from 'Character-Level LSTM in PyTorch' and 'Sentiment Analysis with an RNN.' The first analysis was on 'sequence_length' with the following hyperparameters, and I got the minimum loss at a sequence length of 8: {'batch_size': 128, 'num_epochs': 10, 'learning_rate': 1e-3, 'vocab_size': len(vocab_to_int), 'output_size': len(vocab_to_int), 'embedding_dim': 300, 'hidden_dim': 256, 'n_layers': 2}
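For reference, here is a minimal sketch of how such a sequence_length sweep could be scripted, reusing `batch_data`, `RNN`, `train_rnn`, `int_text`, `vocab_to_int` and `train_on_gpu` from the cells above. This loop is an assumption about how the numbers plotted below were collected, not the exact code that produced them.
```
for seq_len in [2, 3, 4, 5, 6, 7, 8, 12, 25, 50]:
    train_loader = batch_data(int_text, seq_len, 128)   # train_rnn iterates this global
    rnn = RNN(len(vocab_to_int), len(vocab_to_int), 300, 256, 2, dropout=0.5)
    if train_on_gpu:
        rnn.cuda()
    optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # the final loss printed for each run was noted down into analysis_data below
    train_rnn(rnn, 128, optimizer, criterion, 10, show_every_n_batches=1000)
```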
###Code
import matplotlib.pyplot as plt
analysis_data = np.array([[2, 3.874522655248642], [3, 3.8155457878112795],
[4, 3.7750240275859834], [5, 3.7593853921890257],
[6, 3.7249092943668365], [7, 3.7283163669109345],
[8, 3.7121295800209047], [12, 4.162398101806641],
[25, 4.175854583740234], [50, 4.152703736782074]])
plt.plot(analysis_data[:, 0], analysis_data[:, 1], '-o')
plt.xlabel('sequence_length')
plt.ylabel('loss')
plt.grid(True)
###Output
_____no_output_____
###Markdown
The next hyperparameter to analyze was 'hidden_dim.' The 'num_epochs' was increased to 30, and I eventually got a loss of less than 3.5 with a 'hidden_dim' of 512: {'sequence_length': 8, 'batch_size': 128, 'num_epochs': 30, 'learning_rate': 1e-3, 'vocab_size': len(vocab_to_int), 'output_size': len(vocab_to_int), 'embedding_dim': 300, 'n_layers': 2}
###Code
import matplotlib.pyplot as plt
analysis_data = np.array([[128, 3.818365662574768], [256, 3.5466213097572328],
[512, 3.1654540483951568]])
plt.plot(analysis_data[:, 0], analysis_data[:, 1], '-o')
plt.xlabel('hidden_dim')
plt.ylabel('loss')
plt.grid(True)
###Output
_____no_output_____
###Markdown
--- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu() # move back to the cpu so np.roll can operate on it
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: you know, if i have to get the feeling to you, and i don't even know how much this is going to cost the tour?
george: i don't think you have a problem.
jerry: i thought you said you'd be a lot better than that.
jerry: what are you doing here?
elaine:(sarcastic) oh yeah, yeah.
kramer: well, that's the way i can go back to the airport.
jerry:(sarcastically) oh, that's a good idea.
george: what?
jerry:(leaving) i thought you said i was in love with him.
george: well, it's the damnedest thing. it's a big deal to the beach.
jerry: you think i was wrong?
kramer: yeah, well...
jerry: oh! i think i have the best of your car and the rest of the movie) oh my god!
[setting: george's apartment building]
jerry: oh, i got it.
jerry:(pleading) i think you should do it.
george: what?
elaine: oh!
[setting: jerry's apartment]
kramer: oh no, no.
jerry: what is that?
jerry: yeah, i know.
newman: i don't understand......
george: what are you doing?
kramer:(entering monk's) oh, no. i got to get a new car?
kramer: well, you know what i'm thinking?
jerry: yeah.
george: i thought you said i was just going to have to be able to be a bit.
george:(sarcastic) oh, hi. jerry.
jerry: oh, yeah.
george: well...(mutters)
kramer: hey, hey! hey!
kramer: hey, jerry.
george: i know. i can't get that image on you.
kramer: well, i think it's a good idea, i
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
# For better debugging
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (11, 50)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 11 to 50:
george: (on an imaginary microphone) uh, no, not at this time.
jerry: well, senator, id just like to know, what you knew and when you knew it.
claire: mr. seinfeld. mr. costanza.
george: are, are you sure this is decaf? wheres the orange indicator?
claire: its missing, i have to do it in my head decaf left, regular right, decaf left, regular right...its very challenging work.
jerry: can you relax, its a cup of coffee. claire is a professional waitress.
claire: trust me george. no one has any interest in seeing you on caffeine.
george: how come youre not doing the second show tomorrow?
jerry: well, theres this uh, woman might be coming in.
george: wait a second, wait a second, what coming in, what woman is coming in?
jerry: i told you about laura, the girl i met in michigan?
george: no, you didnt!
jerry: i thought i told you about it, yes, she teaches political science? i met her the night i did the show in lansing...
george: ha.
jerry: (looks in the creamer) theres no milk in here, what...
george: wait wait wait, what is she... (takes the milk can from jerry and puts it on the table) what is she like?
jerry: oh, shes really great. i mean, shes got like a real warmth about her and shes really bright and really pretty and uh... the conversation though, i mean, it was... talking with her is like talking with you, but, you know, obviously much better.
george: (smiling) so, you know, what, what happened?
jerry: oh, nothing happened, you know, but is was great.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {}
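# assign an arbitrary integer id to each unique word (note: set() ordering is not deterministic across runs)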
for idx, word in enumerate(set(text)):
if word not in vocab_to_int:
vocab_to_int[word] = int(idx)
int_to_vocab = {v: k for k, v in vocab_to_int.items()}
print(len(vocab_to_int))
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
71
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
".": "||Period||",
",": "||Comma||",
"\"": "||Quotation_Mark||",
";": "||Semicolon||",
"!": "||Exclamation_Mark||",
"?": "||Question_Mark||",
"(": "||Left_Parentheses||",
")": "||Right_Parentheses||",
"-": "||Dash||",
"\n": "||Return||",
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
21388
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
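As an illustrative aside (not part of the required implementation), here is a minimal sketch, using the toy `words` list from the example above, of how the windowed features and targets can be wrapped in a `TensorDataset` and iterated with a `DataLoader`:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]   # toy word ids from the example above
sequence_length = 4

# each feature window is followed by its target word
features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]

data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
data_loader = DataLoader(data, batch_size=2, shuffle=False)

for x, y in data_loader:
    print(x, y)   # first batch: tensor([[1, 2, 3, 4], [2, 3, 4, 5]]) tensor([5, 6])
```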
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors = []
target_tensors = []
n_words = len(words)
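# slide a window of length sequence_length over the word ids: each window is a feature sequence and the word right after it is the target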
for i in range(n_words):
if i + sequence_length < n_words:
feature_tensors.append(words[i:i + sequence_length])
target_tensors.append(words[i + sequence_length])
else:
break
data = TensorDataset(torch.LongTensor(feature_tensors), torch.LongTensor(target_tensors))
return DataLoader(data, batch_size=batch_size, shuffle=True)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
print(batch_data([1,2,3,4,5,6,7,8,9,0], 3, 7))
###Output
<torch.utils.data.dataloader.DataLoader object at 0x7f6c1a1b3978>
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 8, 9, 10, 11, 12],
[34, 35, 36, 37, 38],
[ 5, 6, 7, 8, 9],
[38, 39, 40, 41, 42],
[26, 27, 28, 29, 30],
[29, 30, 31, 32, 33],
[11, 12, 13, 14, 15],
[25, 26, 27, 28, 29],
[21, 22, 23, 24, 25],
[30, 31, 32, 33, 34]])
torch.Size([10])
tensor([13, 39, 10, 43, 31, 34, 16, 30, 26, 35])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# set class variables
self.n_layers = n_layers
self.output_size = output_size
self.hidden_dim = hidden_dim
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# define model layers
self.lstm_layer = nn.LSTM(
embedding_dim,
hidden_dim,
num_layers=n_layers,
dropout=dropout,
batch_first=True
)
self.output_layer = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
embeds = self.word_embeddings(nn_input)
lstm_out, hidden = self.lstm_layer(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
batch_size = nn_input.size(0)
output = self.output_layer(lstm_out)
output = output.view(batch_size, -1, self.output_size)
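# keep only the word scores produced after the last time step of each sequence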
last_batch = output[:, -1]
return last_batch, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if train_on_gpu:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
h = tuple([x.data for x in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)  # pass the detached hidden state into the forward pass
loss = criterion(output, target)
loss.backward()
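# clip gradients to a maximum norm of 5 to help avoid exploding gradients in the LSTM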
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
def repackage_hidden(h):
"""Wraps hidden states in new Tensors, to detach them from their history."""
if isinstance(h, torch.Tensor):
return h.detach()
else:
return tuple(repackage_hidden(v) for v in h)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
hidden = repackage_hidden(hidden)
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.515188800811767
Epoch: 1/10 Loss: 4.894205183506012
Epoch: 1/10 Loss: 4.680005712985992
Epoch: 1/10 Loss: 4.556370007038116
Epoch: 1/10 Loss: 4.45140567445755
Epoch: 1/10 Loss: 4.404972780227661
Epoch: 1/10 Loss: 4.32517939043045
Epoch: 1/10 Loss: 4.31841893196106
Epoch: 1/10 Loss: 4.265971234798432
Epoch: 1/10 Loss: 4.246224465847015
Epoch: 1/10 Loss: 4.212311903953553
Epoch: 1/10 Loss: 4.201020967483521
Epoch: 1/10 Loss: 4.170561553001404
Epoch: 2/10 Loss: 4.055712251348269
Epoch: 2/10 Loss: 3.9973594183921812
Epoch: 2/10 Loss: 3.986263379096985
Epoch: 2/10 Loss: 3.9635803208351135
Epoch: 2/10 Loss: 3.9599059619903563
Epoch: 2/10 Loss: 3.94514505815506
Epoch: 2/10 Loss: 3.935329864501953
Epoch: 2/10 Loss: 3.938438714504242
Epoch: 2/10 Loss: 3.942914291381836
Epoch: 2/10 Loss: 3.952489317417145
Epoch: 2/10 Loss: 3.910309956073761
Epoch: 2/10 Loss: 3.913070430278778
Epoch: 2/10 Loss: 3.931608271598816
Epoch: 3/10 Loss: 3.8426282580545936
Epoch: 3/10 Loss: 3.7449702105522156
Epoch: 3/10 Loss: 3.7627976350784302
Epoch: 3/10 Loss: 3.7916578917503356
Epoch: 3/10 Loss: 3.7599433608055115
Epoch: 3/10 Loss: 3.754775415420532
Epoch: 3/10 Loss: 3.7655256996154787
Epoch: 3/10 Loss: 3.7686352763175965
Epoch: 3/10 Loss: 3.766996611595154
Epoch: 3/10 Loss: 3.767058864116669
Epoch: 3/10 Loss: 3.780586408138275
Epoch: 3/10 Loss: 3.773424741744995
Epoch: 3/10 Loss: 3.7779603548049927
Epoch: 4/10 Loss: 3.705124765480758
Epoch: 4/10 Loss: 3.6273978176116946
Epoch: 4/10 Loss: 3.616773958206177
Epoch: 4/10 Loss: 3.6674270362854005
Epoch: 4/10 Loss: 3.6360745553970335
Epoch: 4/10 Loss: 3.6589384570121766
Epoch: 4/10 Loss: 3.6663544912338257
Epoch: 4/10 Loss: 3.6454354720115663
Epoch: 4/10 Loss: 3.6662793803215026
Epoch: 4/10 Loss: 3.665962187767029
Epoch: 4/10 Loss: 3.6781682834625244
Epoch: 4/10 Loss: 3.676956202983856
Epoch: 4/10 Loss: 3.6632711391448973
Epoch: 5/10 Loss: 3.60518525849924
Epoch: 5/10 Loss: 3.5266586937904356
Epoch: 5/10 Loss: 3.5491776027679443
Epoch: 5/10 Loss: 3.546335247039795
Epoch: 5/10 Loss: 3.5509201469421385
Epoch: 5/10 Loss: 3.558977521896362
Epoch: 5/10 Loss: 3.5540020489692687
Epoch: 5/10 Loss: 3.589643236160278
Epoch: 5/10 Loss: 3.5868148427009583
Epoch: 5/10 Loss: 3.586656662464142
Epoch: 5/10 Loss: 3.584798780441284
Epoch: 5/10 Loss: 3.6107240200042723
Epoch: 5/10 Loss: 3.6055909996032716
Epoch: 6/10 Loss: 3.5303016454442737
Epoch: 6/10 Loss: 3.464726659297943
Epoch: 6/10 Loss: 3.4647241320610047
Epoch: 6/10 Loss: 3.4753291630744934
Epoch: 6/10 Loss: 3.470268227100372
Epoch: 6/10 Loss: 3.496699033737183
Epoch: 6/10 Loss: 3.4944398765563967
Epoch: 6/10 Loss: 3.519511894226074
Epoch: 6/10 Loss: 3.505395049571991
Epoch: 6/10 Loss: 3.520570902347565
Epoch: 6/10 Loss: 3.5238463711738586
Epoch: 6/10 Loss: 3.541940945625305
Epoch: 6/10 Loss: 3.5522071013450622
Epoch: 7/10 Loss: 3.470562788232069
Epoch: 7/10 Loss: 3.403359667301178
Epoch: 7/10 Loss: 3.4054346995353697
Epoch: 7/10 Loss: 3.414729241371155
Epoch: 7/10 Loss: 3.425702956676483
Epoch: 7/10 Loss: 3.4432071204185486
Epoch: 7/10 Loss: 3.446086753845215
Epoch: 7/10 Loss: 3.4525076165199278
Epoch: 7/10 Loss: 3.459822273731232
Epoch: 7/10 Loss: 3.4869738287925722
Epoch: 7/10 Loss: 3.4988680334091184
Epoch: 7/10 Loss: 3.496957610607147
Epoch: 7/10 Loss: 3.4960529704093934
Epoch: 8/10 Loss: 3.42372932498054
Epoch: 8/10 Loss: 3.3471236605644226
Epoch: 8/10 Loss: 3.3584524660110473
Epoch: 8/10 Loss: 3.3678841986656187
Epoch: 8/10 Loss: 3.391036041736603
Epoch: 8/10 Loss: 3.3893140263557435
Epoch: 8/10 Loss: 3.411256212234497
Epoch: 8/10 Loss: 3.4137536835670472
Epoch: 8/10 Loss: 3.409756651878357
Epoch: 8/10 Loss: 3.4282371163368226
Epoch: 8/10 Loss: 3.4518254375457764
Epoch: 8/10 Loss: 3.4454548215866088
Epoch: 8/10 Loss: 3.460259379863739
Epoch: 9/10 Loss: 3.3755362506252324
Epoch: 9/10 Loss: 3.306753198623657
Epoch: 9/10 Loss: 3.3166620497703554
Epoch: 9/10 Loss: 3.327660037994385
Epoch: 9/10 Loss: 3.3482672100067137
Epoch: 9/10 Loss: 3.3388496203422546
Epoch: 9/10 Loss: 3.381403299331665
Epoch: 9/10 Loss: 3.3726201076507567
Epoch: 9/10 Loss: 3.36599197101593
Epoch: 9/10 Loss: 3.4029546031951905
Epoch: 9/10 Loss: 3.4057063155174254
Epoch: 9/10 Loss: 3.4091373586654665
Epoch: 9/10 Loss: 3.4280067586898806
Epoch: 10/10 Loss: 3.3318529320944203
Epoch: 10/10 Loss: 3.2924032731056214
Epoch: 10/10 Loss: 3.275689582824707
Epoch: 10/10 Loss: 3.3046683926582334
Epoch: 10/10 Loss: 3.3120004024505616
Epoch: 10/10 Loss: 3.3203927888870237
Epoch: 10/10 Loss: 3.330588330745697
Epoch: 10/10 Loss: 3.3430827460289003
Epoch: 10/10 Loss: 3.3565797243118287
Epoch: 10/10 Loss: 3.3760718536376952
Epoch: 10/10 Loss: 3.368762094974518
Epoch: 10/10 Loss: 3.3841377515792845
Epoch: 10/10 Loss: 3.3717826838493345
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** We started with hyperparameters that were too large, so training was very slow and the loss had not converged after 4 epochs. Over several attempts we adjusted the hyperparameters as follows: - epochs: from 4 to 10 - sequence_length: from 10 to 8 - batch size: from 256 to 128 - embedding dim: from 300 to 200 - hidden dim: from 512 to 256 With these final values we reached convergence with a loss under 3.5. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
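As a rough, standalone sketch of the top-k sampling idea (assuming a hypothetical 1-D tensor of word scores, not the actual output of this project's model):
```python
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([2.0, 0.5, 1.5, 3.0, 0.1])          # hypothetical scores over a 5-word vocabulary
p = F.softmax(scores, dim=0)                               # convert scores to probabilities
top_p, top_i = p.topk(3)                                   # keep the 3 most likely word ids
top_p, top_i = top_p.numpy(), top_i.numpy()
word_id = np.random.choice(top_i, p=top_p / top_p.sum())   # sample among them (renormalized)
print(word_id)
```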
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:
kramer: well, it's not a good thing.
george:(to kramer) i think you're going to be a good guy.
jerry: i don't know, i don't know.
george:(to himself) you got the car!
jerry: i know. i mean, i have no idea that i was doing the same exercises, i can't believe it.
george:(to kramer) you know, it's a little problem.
elaine: what?
kramer: no, no. no. i don't have to do it, but i have a lot of money.
elaine: what is the point of the exterminator?
jerry: well, i got the job.
elaine: oh, yeah.
jerry: well, i got a call.
kramer: well, what do you think? i mean, i have a little nervous.
jerry:(looking at jerry) you can't believe it. you know what you're doing here?
elaine:(from a very accent) i mean, i just want to go to the bathroom.
jerry: oh, you know, i'm sorry, i don't think so. i don't know what happened. you can do this.(jerry looks at the woman in the air and walks away) i don't want to hear this!
elaine:(looking at his watch) oh, no!
kramer: well, you know, i think i should be going to do that.
elaine: what do you mean, that i'm a good idea for you.
jerry:(to george) i don't want you chuckle.
kramer: well, i'm going to the hospital, and you didn't get it.
jerry: you know what? i just got a little depressed.
george:(looking around, and starts dancing back)
kramer: oh, no. i just want the tape.
jerry: you can't do it?
george:(from the phone) oh my
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
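# sort words by descending frequency so the most common words get the smallest ids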
vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii:word for ii, word in enumerate(vocab)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
from string import punctuation
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_lookup = {'.' : '<PERIOD>',
',' : '<COMMA>',
'"' : '<QUOTATION_MARK>',
';' : '<SEMICOLON>',
'!' : '<EXCLAMATION_MARK>',
'?' : '<QUESTION_MARK>',
'(' : '<LEFT_PAREN>',
')' : '<RIGHT_PAREN>',
'-' : '<DASH>',
'\n' : '<NEW_LINE>' }
return token_lookup
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
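# trim the word list so that only completely full batches are produced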
n_batches = len(words)//batch_size
words = words[:n_batches*batch_size]
features = []
targets = []
for idx in range(0, len(words) - sequence_length):
features.append(words[idx : idx + sequence_length])
targets.append(words[idx + sequence_length])
data = TensorDataset(torch.from_numpy(np.asarray(features)), torch.from_numpy(np.asarray(targets)))
data_loader = torch.utils.data.DataLoader(data, shuffle=False , batch_size = batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
output = out[:, -1]
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
output, h = rnn(inp, h)
loss = criterion(output, target)
loss.backward()
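# clip the gradient norm at 5 to guard against exploding gradients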
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
_loss = loss.item()
# return the loss over a batch and the hidden state produced by our model
return _loss, h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 8
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 8 epoch(s)...
Epoch: 1/8 Loss: 5.5322976059913636
Epoch: 1/8 Loss: 4.905174459934234
Epoch: 1/8 Loss: 4.659348671913147
Epoch: 1/8 Loss: 4.530960459232331
Epoch: 1/8 Loss: 4.525175276279449
Epoch: 1/8 Loss: 4.560917906284332
Epoch: 1/8 Loss: 4.4576641521453855
Epoch: 1/8 Loss: 4.33813676404953
Epoch: 1/8 Loss: 4.309417385578156
Epoch: 1/8 Loss: 4.248244787693023
Epoch: 1/8 Loss: 4.361484883785248
Epoch: 1/8 Loss: 4.380487771987915
Epoch: 1/8 Loss: 4.385624749183655
Epoch: 2/8 Loss: 4.176683894366272
Epoch: 2/8 Loss: 4.009177144527436
Epoch: 2/8 Loss: 3.9248783888816834
Epoch: 2/8 Loss: 3.8828868799209593
Epoch: 2/8 Loss: 3.9278798971176148
Epoch: 2/8 Loss: 4.003253318786621
Epoch: 2/8 Loss: 3.9587276787757872
Epoch: 2/8 Loss: 3.8582862396240234
Epoch: 2/8 Loss: 3.8608912954330443
Epoch: 2/8 Loss: 3.8152741413116456
Epoch: 2/8 Loss: 3.9530254421234132
Epoch: 2/8 Loss: 3.94459757900238
Epoch: 2/8 Loss: 3.936078099727631
Epoch: 3/8 Loss: 3.8280178379913994
Epoch: 3/8 Loss: 3.7554098348617555
Epoch: 3/8 Loss: 3.6783560771942136
Epoch: 3/8 Loss: 3.6489162855148316
Epoch: 3/8 Loss: 3.6808696751594545
Epoch: 3/8 Loss: 3.7744146695137024
Epoch: 3/8 Loss: 3.74846625995636
Epoch: 3/8 Loss: 3.6554874119758605
Epoch: 3/8 Loss: 3.6674591946601867
Epoch: 3/8 Loss: 3.62701020860672
Epoch: 3/8 Loss: 3.7350317368507384
Epoch: 3/8 Loss: 3.749186454296112
Epoch: 3/8 Loss: 3.734345266819
Epoch: 4/8 Loss: 3.6477463097611733
Epoch: 4/8 Loss: 3.602082010746002
Epoch: 4/8 Loss: 3.536734944343567
Epoch: 4/8 Loss: 3.5097719435691834
Epoch: 4/8 Loss: 3.5255525641441343
Epoch: 4/8 Loss: 3.6225003170967103
Epoch: 4/8 Loss: 3.616239989757538
Epoch: 4/8 Loss: 3.5258395104408264
Epoch: 4/8 Loss: 3.497856577396393
Epoch: 4/8 Loss: 3.4795735268592836
Epoch: 4/8 Loss: 3.588129153728485
Epoch: 4/8 Loss: 3.6102542405128477
Epoch: 4/8 Loss: 3.591351550579071
Epoch: 5/8 Loss: 3.523304050865252
Epoch: 5/8 Loss: 3.4892584409713745
Epoch: 5/8 Loss: 3.4241340975761414
Epoch: 5/8 Loss: 3.412660810470581
Epoch: 5/8 Loss: 3.4268025426864623
Epoch: 5/8 Loss: 3.516488340854645
Epoch: 5/8 Loss: 3.520873282909393
Epoch: 5/8 Loss: 3.4241799364089966
Epoch: 5/8 Loss: 3.4138887639045716
Epoch: 5/8 Loss: 3.3838316583633423
Epoch: 5/8 Loss: 3.5008965816497803
Epoch: 5/8 Loss: 3.5092281589508056
Epoch: 5/8 Loss: 3.497810969829559
Epoch: 6/8 Loss: 3.4379875421031447
Epoch: 6/8 Loss: 3.411441514968872
Epoch: 6/8 Loss: 3.3444636688232423
Epoch: 6/8 Loss: 3.3374139881134033
Epoch: 6/8 Loss: 3.346661656856537
Epoch: 6/8 Loss: 3.440621413230896
Epoch: 6/8 Loss: 3.4377436027526858
Epoch: 6/8 Loss: 3.346252854347229
Epoch: 6/8 Loss: 3.324707386493683
Epoch: 6/8 Loss: 3.3005346908569337
Epoch: 6/8 Loss: 3.408595465183258
Epoch: 6/8 Loss: 3.4262085876464843
Epoch: 6/8 Loss: 3.4145790710449218
Epoch: 7/8 Loss: 3.369618340710963
Epoch: 7/8 Loss: 3.3499011301994326
Epoch: 7/8 Loss: 3.27999951171875
Epoch: 7/8 Loss: 3.287960659503937
Epoch: 7/8 Loss: 3.293981553077698
Epoch: 7/8 Loss: 3.382100666999817
Epoch: 7/8 Loss: 3.378078860759735
Epoch: 7/8 Loss: 3.2818254389762878
Epoch: 7/8 Loss: 3.2670616817474367
Epoch: 7/8 Loss: 3.2399463267326354
Epoch: 7/8 Loss: 3.339293287754059
Epoch: 7/8 Loss: 3.3616325244903567
Epoch: 7/8 Loss: 3.3500450859069826
Epoch: 8/8 Loss: 3.3117458441040735
Epoch: 8/8 Loss: 3.3048712882995606
Epoch: 8/8 Loss: 3.231137599468231
Epoch: 8/8 Loss: 3.2351998553276062
Epoch: 8/8 Loss: 3.2364736914634706
Epoch: 8/8 Loss: 3.3298539748191835
Epoch: 8/8 Loss: 3.3275475611686707
Epoch: 8/8 Loss: 3.237054452896118
Epoch: 8/8 Loss: 3.2113740792274474
Epoch: 8/8 Loss: 3.198173487186432
Epoch: 8/8 Loss: 3.293269034385681
Epoch: 8/8 Loss: 3.316383964061737
Epoch: 8/8 Loss: 3.302833655357361
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - As mentioned in the course videos, a larger batch size is generally better unless it causes performance problems; I tried 64 and 128, and 128 worked better. I also looked up similar projects and their hyperparameters and saw that a sequence length of about 10 (+/- 2) performs well, so I used 10. - I started with a learning rate of 0.01, but the loss oscillated too much, so I reduced it to 0.001. - Based on other projects and the exercises from the RNN course, an embedding_dim of 200 and a hidden_dim of 256 seemed reasonable. - I used 2 RNN layers to add capacity without making the model overly complex. - I trained the model several times with num_epochs above 10 and saw that the loss barely changed after the 7th epoch, so I used 8 epochs, which was sufficient and saved training time. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
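To make the sampling step concrete, here is a tiny self-contained illustration of the top-k trick used in `generate` below; the scores and the vocabulary size are made up for the example:
```
import torch
import torch.nn.functional as F
import numpy as np

scores = torch.tensor([[0.1, 2.0, 0.5, 1.5, 0.2]])      # fake word scores for a 5-word vocabulary
p = F.softmax(scores, dim=1).data                        # turn scores into probabilities
p, top_i = p.topk(3)                                     # keep only the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
next_word_id = np.random.choice(top_i, p=p / p.sum())    # sample among the top 3
print(next_word_id)                                      # usually 1 or 3, the two highest-scoring ids
```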
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:37: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
_____no_output_____
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
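For reference, a minimal sketch of building the two dictionaries with `collections.Counter`, with ids ordered by word frequency (the ordering is a convention, not a requirement, and the `_sketch` suffix only distinguishes this from the function you implement below):
```
from collections import Counter

def create_lookup_tables_sketch(text):
    # most frequent word gets id 0, the next one id 1, and so on
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab
```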
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# return tuple
return (None, None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
_____no_output_____
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
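For illustration, a dictionary of the required shape might look like the sketch below; the exact token strings are a convention and only need to be unique and not confusable with ordinary words (the `_sketch` suffix marks this as an example, not the graded implementation):
```
def token_lookup_sketch():
    # each punctuation symbol maps to a unique placeholder token
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '-': '||Dash||',
        '\n': '||Return||',
    }
```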
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
_____no_output_____
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
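A minimal sketch of one way to build these (sequence, next word) pairs and wrap them in a `DataLoader`; the `_sketch` suffix distinguishes it from the function you implement below, and shuffling is optional:
```
import torch
from torch.utils.data import TensorDataset, DataLoader

def batch_data_sketch(words, sequence_length, batch_size):
    # one training pair per valid starting position in `words`
    features, targets = [], []
    for ndx in range(len(words) - sequence_length):
        features.append(list(words[ndx:ndx + sequence_length]))
        targets.append(words[ndx + sequence_length])
    data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
    return DataLoader(data, batch_size=batch_size, shuffle=True)

# e.g. batch_data_sketch(list(range(50)), sequence_length=4, batch_size=10)
```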
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# return a dataloader
return None
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
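For orientation, here is a minimal sketch of such a module using an embedding layer, an LSTM and a final fully-connected layer (a GRU would follow the same pattern). The class name `RNNSketch` only distinguishes it from the `RNN` class you are asked to implement, and moving the hidden state to the GPU is omitted for brevity:
```
import torch.nn as nn

class RNNSketch(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.output_size = output_size
        # embedding -> multi-layer LSTM -> fully-connected layer over the vocabulary
        self.embed = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        lstm_output, hidden = self.lstm(self.embed(nn_input), hidden)
        # stack the LSTM outputs, score every time step, then keep only the last one
        out = self.fc(lstm_output.contiguous().view(-1, self.hidden_dim))
        out = out.view(batch_size, -1, self.output_size)[:, -1]
        return out, hidden

    def init_hidden(self, batch_size):
        # zero-initialised hidden and cell states (move them to the GPU if you train there)
        weight = next(self.parameters()).data
        return (weight.new_zeros(self.n_layers, batch_size, self.hidden_dim),
                weight.new_zeros(self.n_layers, batch_size, self.hidden_dim))
```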
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
_____no_output_____
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
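As a rough sketch of the steps involved (detach the hidden state, move data to the GPU if available, zero the gradients, forward pass, backpropagate, optimizer step); the function name is hypothetical, and it assumes an LSTM-style tuple hidden state and the `train_on_gpu` flag defined earlier in the notebook:
```
def forward_back_prop_sketch(rnn, optimizer, criterion, inp, target, hidden):
    # detach so gradients do not flow back through the entire training history
    hidden = tuple(h.detach() for h in hidden)
    if train_on_gpu:
        inp, target = inp.cuda(), target.cuda()
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)   # forward pass
    loss = criterion(output, target)
    loss.backward()                     # backpropagation
    optimizer.step()                    # parameter update
    return loss.item(), hidden
```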
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
_____no_output_____
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
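As a purely illustrative starting point (these particular numbers are assumptions for the example, not a recommendation), values in roughly this range can be tried first and then adjusted based on the observed loss:
```
sequence_length = 10            # words per training sequence
batch_size = 128
num_epochs = 8
learning_rate = 0.001
vocab_size = len(vocab_to_int)
output_size = vocab_size
embedding_dim = 200             # noticeably smaller than vocab_size
hidden_dim = 256
n_layers = 2
show_every_n_batches = 500
```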
###Code
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (2, 12)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 2 to 12:
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
jerry: oh, you dont recall?
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
#reference source: inspired/copied from course samples
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
retval = {
".": "||Period||",
",": "||Comma||",
"\"": "||QuotationMark||",
";": "||Semicolon||",
"!": "||ExclamationMark||",
"?": "||QuestionMark||",
"(": "||LeftParentheses||",
")": "||RightParentheses||",
"-": "||Dash||",
"\n": "||Return||",
}
return retval
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
len(int_text)
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
nb_samples = 6
features = torch.randn(nb_samples, 10)
labels = torch.empty(nb_samples, dtype=torch.long).random_(10)
dataset = TensorDataset(features, labels)
loader = DataLoader(
dataset,
batch_size=2
)
for batch_idx, (x, y) in enumerate(loader):
print(x.shape, y.shape)
print(features)
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
batch = len(words)//batch_size
words = words[:batch*batch_size]
feature_tensors, target_tensors = [], []
for ndx in range(len(words) - sequence_length):
feature_tensors += [words[ndx:ndx+sequence_length]]
target_tensors += [words[ndx+sequence_length]]
feature_tensors = torch.LongTensor(feature_tensors)
target_tensors = torch.LongTensor(target_tensors)
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size,
shuffle=True
)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=6, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 6])
tensor([[ 13, 14, 15, 16, 17, 18],
[ 20, 21, 22, 23, 24, 25],
[ 30, 31, 32, 33, 34, 35],
[ 2, 3, 4, 5, 6, 7],
[ 16, 17, 18, 19, 20, 21],
[ 24, 25, 26, 27, 28, 29],
[ 0, 1, 2, 3, 4, 5],
[ 38, 39, 40, 41, 42, 43],
[ 7, 8, 9, 10, 11, 12],
[ 18, 19, 20, 21, 22, 23]])
torch.Size([10])
tensor([ 19, 26, 36, 8, 22, 30, 6, 44, 13, 24])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
#reference source: inspired/copied from course samples
import numpy as np
def one_hot_encode(arr, n_labels):
arr = arr.cpu().numpy()
# Initialize the the encoded array
one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32)
# Fill the appropriate elements with ones
one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
# Finally reshape it to get back to the original array
one_hot = one_hot.reshape((*arr.shape, n_labels))
if(train_on_gpu):
return torch.from_numpy(one_hot).cuda()
else:
return torch.from_numpy(one_hot)
# check that the function works as expected
test_seq = np.array([[3, 5, 1]])
test_seq = torch.from_numpy(test_seq)
print(test_seq)
one_hot = one_hot_encode(test_seq, 8)
print(one_hot)
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.input_dim = vocab_size
self.hidden_dim = hidden_dim
self.output_dim = output_size
self.n_layers = n_layers
self.dropout_prob = dropout
self.embedding_dim = embedding_dim
## define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.dropout_prob, batch_first=True)
self.dropout = nn.Dropout(dropout)
#final fully connected
self.fc = nn.Linear(self.hidden_dim, self.output_dim)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# ## outputs and the new hidden state
# nn_input = one_hot_encode(nn_input, self.input_dim)
embedding = self.embed(nn_input)
lstm_output, hidden = self.lstm(embedding, hidden)
# lstm_output, hidden = self.lstm(nn_input, hidden) #without embedding
out = self.dropout(lstm_output)
#stack the outputs of the lstm to pass to your fully-connected layer
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
##From notes above
#The output of this model should be the last batch of word scores after a complete sequence has been processed.
#That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
# reshape into (batch_size, seq_length, output_size)
out = out.view(self.batch_size, -1, self.output_dim)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
self.batch_size = batch_size
weight = next(self.parameters()).data
# two new tensors with sizes n_layers x batch_size x n_hidden
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
#one hot encoding?
#required for non embeded case only
# zero accumulated gradients
rnn.zero_grad()
#To avoid retain_graph=True, inspired from course discussions
hidden = (hidden[0].detach(), hidden[1].detach())
# move data to GPU, if available
if(train_on_gpu):
inp = inp.cuda()
target = target.cuda()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target) #target.view(batch_size*sequence_length)
# perform backpropagation and optimization
# loss.backward(retain_graph=True) #Removed due to high resource consumption
loss.backward()
##did not get any advantage
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
# nn.utils.clip_grad_norm_(rnn.parameters(), clip) ?
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s), %d batch size, %d show every..." % (n_epochs, batch_size, show_every_n_batches))
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
#modified version with detailed printing, global loss for loaded network (rnn), and saving network
def train_rnn_copy(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, myGlobalLoss=10):
batch_losses = []
rnn.train()
print("Training for %d epoch(s), %d batch size, show every %d, global loss %.4f..."
% (n_epochs, batch_size, show_every_n_batches, myGlobalLoss))
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
avgLoss = np.average(batch_losses)
print('Epoch: {:>4}/{:<4} Batch: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, batch_i, n_batches, np.average(batch_losses)))
batch_losses = []
if(myGlobalLoss > avgLoss):
print('Global Loss {} ---> {}. Saving...'.format(myGlobalLoss, avgLoss))
myGlobalLoss = avgLoss
#saved at batch level for quick testing and restart
#should be moved to epoch level to avoid saving semi-trained network
helper.save_model('./save/trained_rnn_mid_we', rnn)
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length, # of words in a sequence
sequence_length = 10
# Batch Size
if(train_on_gpu):
batch_size = 512 #128 #64
else:
batch_size = 5
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
myGlobalLoss = 5
myDropout = 0.5 #0.8
# Number of Epochs
num_epochs = 10 #5 #50
# Learning Rate
learning_rate = 0.001 #0.002 #0.005 #0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)+1
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300 #256 #200
# Hidden Dimension, Usually larger is better performance wise. Common values are 128, 256, 512,
hidden_dim = 512 #256
# Number of RNN Layers, Typically between 1-3
n_layers = 2
# Show stats for every n number of batches
if(train_on_gpu):
show_every_n_batches = 200
else:
show_every_n_batches = 1
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
#for debugging purposes
# import os
# os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=myDropout)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
try:
rnn = helper.load_model('./save/trained_rnn_mid_we')
print("loaded mid save model")
except:
try:
rnn = helper.load_model('./save/trained_rnn')
print("failed mid save.. loaded global model")
except:
print("could not load any model")
finally:
print(rnn)
# training the model
trained_rnn = train_rnn_copy(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, myGlobalLoss)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
could not load any model
RNN(
(dropout): Dropout(p=0.5)
(embed): Embedding(21389, 300)
(lstm): LSTM(300, 512, num_layers=2, batch_first=True, dropout=0.5)
(fc): Linear(in_features=512, out_features=21389, bias=True)
)
Training for 10 epoch(s), 512 batch size, show every 200, global loss 5.0000...
Epoch: 1/10 Batch: 200/1741 Loss: 5.5300157618522645
Epoch: 1/10 Batch: 400/1741 Loss: 4.861690397262573
Global Loss 5 ---> 4.861690397262573. Saving...
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** - Tried multiple combinations of hyperparameters to get good results. - sequence_length: Tried different sequence lengths between 5 and 30. Longer sequences took more time to train, so I used 10, which gave satisfactory results. - batch size: A larger batch size gave better results. Due to GPU memory limitations I used 512 with the embedding layer; without embedding, the maximum that fit in memory was 128. - embedding layer: For the first experiments I did not use an embedding layer; once the embedding was added, savings in both memory and training time were recorded. - learning rate: Tried different learning rates. In the initial investigations, higher learning rates (~0.01) did not converge to a satisfactory solution. I also tried decreasing the learning rate manually after a few epochs (a short sketch of this follows below) and saw marginal improvements. I then tried values between 0.001 and 0.0005; 0.001 gave the best results, so I kept it. - hidden dim: Increasing the hidden dimension decreased the loss, but due to memory limitations I used 512. - n_layers: A value between 1 and 3 is recommended; 2 was a good choice and gave good results. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
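For reference, the manual learning-rate decrease mentioned in the answer above can be done by editing the optimizer's parameter groups between epochs, or equivalently with a scheduler such as `torch.optim.lr_scheduler.StepLR`; a small sketch, assuming `optimizer` is the Adam optimizer created earlier (the decay factor is illustrative):
```
# halve the Adam learning rate once per epoch
for g in optimizer.param_groups:
    g['lr'] *= 0.5

# or, equivalently, with a scheduler that is stepped once per epoch
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)
# ... call scheduler.step() after each epoch
```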
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
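As a minimal, self-contained illustration of the top-k sampling step that the `generate` function below performs (the score values here are invented for the example):

```python
# Toy illustration of top-k sampling on an invented score vector.
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 1.2, 3.1, 0.1]])  # pretend word scores for a 5-word vocabulary
p = F.softmax(scores, dim=1).data                    # convert the scores into probabilities
p, top_i = p.topk(3)                                 # keep only the 3 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())      # sample one id, weighted by its probability
print(word_i)                                        # usually 3 (the highest score), sometimes 0 or 2
```

Restricting the choice to the top k ids keeps the text coherent, while the weighted random draw stops the model from repeating the single most likely word over and over.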
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()  # move back to the CPU so np.roll can work on the tensor
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:51: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
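As a quick illustration of the round-trip property the two dictionaries should satisfy, on a toy vocabulary (this snippet is only for intuition, not part of the project code):

```python
# Toy illustration: the two lookup dicts must invert each other.
words = ['jerry', 'says', 'hello', 'hello', 'jerry']
vocab = set(words)
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}

assert all(int_to_vocab[vocab_to_int[word]] == word for word in vocab)
print(vocab_to_int)   # e.g. {'says': 0, 'hello': 1, 'jerry': 2} (order may differ)
```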
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = list(set(text))
vocab_to_int = {word: i for i, word in enumerate(words)}
int_to_vocab = {i: word for i, word in enumerate(words)}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
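To see why the space-padded tokens matter, here is a rough sketch of how such a dictionary ends up being applied during pre-processing; this is an assumption about what `helper.preprocess_and_save_data` does with it, not the helper's actual code:

```python
# Rough sketch (assumed behaviour of the helper, shown only for intuition):
# each symbol is replaced by ' <token> ' so that splitting on spaces keeps
# punctuation as separate "words".
token_dict = {'!': '||Exclamation_Mark||', '?': '||Question_Mark||'}  # abbreviated
sample = 'hello! are you through?'
for symbol, token in token_dict.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
# ['hello', '||Exclamation_Mark||', 'are', 'you', 'through', '||Question_Mark||']
```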
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_list = [['Period', '.'],
['Comma', ','],
['Quotation_Mark', '"'],
['Semicolon', ';'],
['Exclamation_mark','!'],
['Question_mark','?'],
['Left_Parentheses','('],
['Right_Parentheses',')'],
['Dash','-'],
['Return','\n']]
token_dict = {char:f"||{token}||" for token, char in token_list}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
num_sequences = len(words) - sequence_length - 1
for start_i in range(num_sequences):
features.append(words[start_i : start_i + sequence_length])
targets.append(words[start_i + sequence_length])
data = TensorDataset(torch.tensor(features, dtype=torch.long), torch.tensor(targets, dtype=torch.long))
dataloader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_data(int_text, 4, 128)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 4, 5, 6, 7, 8],
[35, 36, 37, 38, 39],
[34, 35, 36, 37, 38],
[ 1, 2, 3, 4, 5],
[17, 18, 19, 20, 21],
[39, 40, 41, 42, 43],
[31, 32, 33, 34, 35],
[37, 38, 39, 40, 41],
[27, 28, 29, 30, 31]])
torch.Size([10])
tensor([ 5, 9, 40, 39, 6, 22, 44, 36, 42, 32])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.embedding.weight.data.uniform_(-1, 1)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = x.shape[0]
x = x.long()
embed = self.embedding(x)
raw_out, hidden = self.lstm(embed, hidden)
out = raw_out.contiguous().view(-1, self.hidden_dim)  # stack the LSTM outputs for the fully-connected layer
out = self.dropout(out)
out = self.fc(out)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
w = next(self.parameters()).data
def get_hidden_w(weight, has_gpu):
hidden_w = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
if has_gpu:
hidden_w = hidden_w.cuda()
return hidden_w
hidden = (get_hidden_w(w, train_on_gpu), get_hidden_w(w, train_on_gpu))
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
rnn.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
out, hidden = rnn(inp, hidden)
loss = criterion(out, target.long())
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
counter = 0
rnn.train()
n_batches = len(train_loader.dataset)//batch_size
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
counter += 1
# make sure you iterate over completely full batches, only
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if counter % show_every_n_batches == 0:
print('Loss: {:.3f} Epoch progress {:.0f}%... Epoch: {:>4}/{:<4} '
.format(
np.average(batch_losses),
batch_i/n_batches* 100,
epoch_i,
n_epochs )
)
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 1024
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
## Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 50
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Loss: 6.475 Epoch progess 6%... Epoch: 1/10
Loss: 5.581 Epoch progess 11%... Epoch: 1/10
Loss: 5.271 Epoch progess 17%... Epoch: 1/10
Loss: 5.060 Epoch progess 23%... Epoch: 1/10
Loss: 4.975 Epoch progess 29%... Epoch: 1/10
Loss: 4.867 Epoch progess 34%... Epoch: 1/10
Loss: 4.761 Epoch progess 40%... Epoch: 1/10
Loss: 4.715 Epoch progess 46%... Epoch: 1/10
Loss: 4.659 Epoch progess 52%... Epoch: 1/10
Loss: 4.605 Epoch progess 57%... Epoch: 1/10
Loss: 4.575 Epoch progess 63%... Epoch: 1/10
Loss: 4.504 Epoch progess 69%... Epoch: 1/10
Loss: 4.499 Epoch progess 75%... Epoch: 1/10
Loss: 4.465 Epoch progess 80%... Epoch: 1/10
Loss: 4.415 Epoch progess 86%... Epoch: 1/10
Loss: 4.415 Epoch progess 92%... Epoch: 1/10
Loss: 4.379 Epoch progess 98%... Epoch: 1/10
Loss: 4.324 Epoch progess 3%... Epoch: 2/10
Loss: 4.285 Epoch progess 9%... Epoch: 2/10
Loss: 4.278 Epoch progess 15%... Epoch: 2/10
Loss: 4.241 Epoch progess 20%... Epoch: 2/10
Loss: 4.245 Epoch progess 26%... Epoch: 2/10
Loss: 4.267 Epoch progess 32%... Epoch: 2/10
Loss: 4.224 Epoch progess 38%... Epoch: 2/10
Loss: 4.209 Epoch progess 43%... Epoch: 2/10
Loss: 4.225 Epoch progess 49%... Epoch: 2/10
Loss: 4.185 Epoch progess 55%... Epoch: 2/10
Loss: 4.183 Epoch progess 61%... Epoch: 2/10
Loss: 4.170 Epoch progess 66%... Epoch: 2/10
Loss: 4.180 Epoch progess 72%... Epoch: 2/10
Loss: 4.170 Epoch progess 78%... Epoch: 2/10
Loss: 4.158 Epoch progess 84%... Epoch: 2/10
Loss: 4.146 Epoch progess 89%... Epoch: 2/10
Loss: 4.129 Epoch progess 95%... Epoch: 2/10
Loss: 4.148 Epoch progess 1%... Epoch: 3/10
Loss: 4.066 Epoch progess 6%... Epoch: 3/10
Loss: 4.021 Epoch progess 12%... Epoch: 3/10
Loss: 4.037 Epoch progess 18%... Epoch: 3/10
Loss: 4.029 Epoch progess 24%... Epoch: 3/10
Loss: 4.034 Epoch progess 29%... Epoch: 3/10
Loss: 4.035 Epoch progess 35%... Epoch: 3/10
Loss: 4.011 Epoch progess 41%... Epoch: 3/10
Loss: 4.018 Epoch progess 47%... Epoch: 3/10
Loss: 4.029 Epoch progess 52%... Epoch: 3/10
Loss: 4.033 Epoch progess 58%... Epoch: 3/10
Loss: 4.002 Epoch progess 64%... Epoch: 3/10
Loss: 4.006 Epoch progess 70%... Epoch: 3/10
Loss: 3.987 Epoch progess 75%... Epoch: 3/10
Loss: 4.017 Epoch progess 81%... Epoch: 3/10
Loss: 4.001 Epoch progess 87%... Epoch: 3/10
Loss: 4.003 Epoch progess 93%... Epoch: 3/10
Loss: 3.966 Epoch progess 98%... Epoch: 3/10
Loss: 3.932 Epoch progess 4%... Epoch: 4/10
Loss: 3.881 Epoch progess 10%... Epoch: 4/10
Loss: 3.909 Epoch progess 15%... Epoch: 4/10
Loss: 3.897 Epoch progess 21%... Epoch: 4/10
Loss: 3.880 Epoch progess 27%... Epoch: 4/10
Loss: 3.890 Epoch progess 33%... Epoch: 4/10
Loss: 3.893 Epoch progess 38%... Epoch: 4/10
Loss: 3.892 Epoch progess 44%... Epoch: 4/10
Loss: 3.906 Epoch progess 50%... Epoch: 4/10
Loss: 3.888 Epoch progess 56%... Epoch: 4/10
Loss: 3.896 Epoch progess 61%... Epoch: 4/10
Loss: 3.878 Epoch progess 67%... Epoch: 4/10
Loss: 3.905 Epoch progess 73%... Epoch: 4/10
Loss: 3.871 Epoch progess 79%... Epoch: 4/10
Loss: 3.882 Epoch progess 84%... Epoch: 4/10
Loss: 3.876 Epoch progess 90%... Epoch: 4/10
Loss: 3.885 Epoch progess 96%... Epoch: 4/10
Loss: 3.850 Epoch progess 1%... Epoch: 5/10
Loss: 3.785 Epoch progess 7%... Epoch: 5/10
Loss: 3.783 Epoch progess 13%... Epoch: 5/10
Loss: 3.787 Epoch progess 19%... Epoch: 5/10
Loss: 3.807 Epoch progess 24%... Epoch: 5/10
Loss: 3.777 Epoch progess 30%... Epoch: 5/10
Loss: 3.789 Epoch progess 36%... Epoch: 5/10
Loss: 3.789 Epoch progess 42%... Epoch: 5/10
Loss: 3.774 Epoch progess 47%... Epoch: 5/10
Loss: 3.783 Epoch progess 53%... Epoch: 5/10
Loss: 3.774 Epoch progess 59%... Epoch: 5/10
Loss: 3.794 Epoch progess 65%... Epoch: 5/10
Loss: 3.793 Epoch progess 70%... Epoch: 5/10
Loss: 3.795 Epoch progess 76%... Epoch: 5/10
Loss: 3.798 Epoch progess 82%... Epoch: 5/10
Loss: 3.808 Epoch progess 87%... Epoch: 5/10
Loss: 3.787 Epoch progess 93%... Epoch: 5/10
Loss: 3.825 Epoch progess 99%... Epoch: 5/10
Loss: 3.723 Epoch progess 5%... Epoch: 6/10
Loss: 3.684 Epoch progess 10%... Epoch: 6/10
Loss: 3.706 Epoch progess 16%... Epoch: 6/10
Loss: 3.689 Epoch progess 22%... Epoch: 6/10
Loss: 3.693 Epoch progess 28%... Epoch: 6/10
Loss: 3.709 Epoch progess 33%... Epoch: 6/10
Loss: 3.723 Epoch progess 39%... Epoch: 6/10
Loss: 3.742 Epoch progess 45%... Epoch: 6/10
Loss: 3.720 Epoch progess 51%... Epoch: 6/10
Loss: 3.692 Epoch progess 56%... Epoch: 6/10
Loss: 3.700 Epoch progess 62%... Epoch: 6/10
Loss: 3.714 Epoch progess 68%... Epoch: 6/10
Loss: 3.703 Epoch progess 73%... Epoch: 6/10
Loss: 3.701 Epoch progess 79%... Epoch: 6/10
Loss: 3.697 Epoch progess 85%... Epoch: 6/10
Loss: 3.733 Epoch progess 91%... Epoch: 6/10
Loss: 3.712 Epoch progess 96%... Epoch: 6/10
Loss: 3.675 Epoch progess 2%... Epoch: 7/10
Loss: 3.606 Epoch progess 8%... Epoch: 7/10
Loss: 3.639 Epoch progess 14%... Epoch: 7/10
Loss: 3.624 Epoch progess 19%... Epoch: 7/10
Loss: 3.655 Epoch progess 25%... Epoch: 7/10
Loss: 3.633 Epoch progess 31%... Epoch: 7/10
Loss: 3.615 Epoch progess 37%... Epoch: 7/10
Loss: 3.637 Epoch progess 42%... Epoch: 7/10
Loss: 3.642 Epoch progess 48%... Epoch: 7/10
Loss: 3.636 Epoch progess 54%... Epoch: 7/10
Loss: 3.652 Epoch progess 59%... Epoch: 7/10
Loss: 3.629 Epoch progess 65%... Epoch: 7/10
Loss: 3.634 Epoch progess 71%... Epoch: 7/10
Loss: 3.651 Epoch progess 77%... Epoch: 7/10
Loss: 3.648 Epoch progess 82%... Epoch: 7/10
Loss: 3.648 Epoch progess 88%... Epoch: 7/10
Loss: 3.627 Epoch progess 94%... Epoch: 7/10
Loss: 3.658 Epoch progess 100%... Epoch: 7/10
Loss: 3.526 Epoch progess 5%... Epoch: 8/10
Loss: 3.553 Epoch progess 11%... Epoch: 8/10
Loss: 3.554 Epoch progess 17%... Epoch: 8/10
Loss: 3.574 Epoch progess 23%... Epoch: 8/10
Loss: 3.586 Epoch progess 28%... Epoch: 8/10
Loss: 3.586 Epoch progess 34%... Epoch: 8/10
Loss: 3.544 Epoch progess 40%... Epoch: 8/10
Loss: 3.566 Epoch progess 45%... Epoch: 8/10
Loss: 3.574 Epoch progess 51%... Epoch: 8/10
Loss: 3.560 Epoch progess 57%... Epoch: 8/10
Loss: 3.593 Epoch progess 63%... Epoch: 8/10
Loss: 3.574 Epoch progess 68%... Epoch: 8/10
Loss: 3.583 Epoch progess 74%... Epoch: 8/10
Loss: 3.591 Epoch progess 80%... Epoch: 8/10
Loss: 3.597 Epoch progess 86%... Epoch: 8/10
Loss: 3.573 Epoch progess 91%... Epoch: 8/10
Loss: 3.586 Epoch progess 97%... Epoch: 8/10
Loss: 3.534 Epoch progess 3%... Epoch: 9/10
Loss: 3.448 Epoch progess 8%... Epoch: 9/10
Loss: 3.497 Epoch progess 14%... Epoch: 9/10
Loss: 3.484 Epoch progess 20%... Epoch: 9/10
Loss: 3.502 Epoch progess 26%... Epoch: 9/10
Loss: 3.491 Epoch progess 31%... Epoch: 9/10
Loss: 3.549 Epoch progess 37%... Epoch: 9/10
Loss: 3.498 Epoch progess 43%... Epoch: 9/10
Loss: 3.519 Epoch progess 49%... Epoch: 9/10
Loss: 3.523 Epoch progess 54%... Epoch: 9/10
Loss: 3.524 Epoch progess 60%... Epoch: 9/10
Loss: 3.520 Epoch progess 66%... Epoch: 9/10
Loss: 3.517 Epoch progess 72%... Epoch: 9/10
Loss: 3.517 Epoch progess 77%... Epoch: 9/10
Loss: 3.528 Epoch progess 83%... Epoch: 9/10
Loss: 3.521 Epoch progess 89%... Epoch: 9/10
Loss: 3.521 Epoch progess 95%... Epoch: 9/10
Loss: 3.541 Epoch progess 0%... Epoch: 10/10
Loss: 3.420 Epoch progess 6%... Epoch: 10/10
Loss: 3.447 Epoch progess 12%... Epoch: 10/10
Loss: 3.431 Epoch progess 17%... Epoch: 10/10
Loss: 3.421 Epoch progess 23%... Epoch: 10/10
Loss: 3.439 Epoch progess 29%... Epoch: 10/10
Loss: 3.460 Epoch progess 35%... Epoch: 10/10
Loss: 3.440 Epoch progess 40%... Epoch: 10/10
Loss: 3.454 Epoch progess 46%... Epoch: 10/10
Loss: 3.472 Epoch progess 52%... Epoch: 10/10
Loss: 3.483 Epoch progess 58%... Epoch: 10/10
Loss: 3.460 Epoch progess 63%... Epoch: 10/10
Loss: 3.459 Epoch progess 69%... Epoch: 10/10
Loss: 3.483 Epoch progess 75%... Epoch: 10/10
Loss: 3.474 Epoch progess 81%... Epoch: 10/10
Loss: 3.459 Epoch progess 86%... Epoch: 10/10
Loss: 3.475 Epoch progess 92%... Epoch: 10/10
Loss: 3.483 Epoch progess 98%... Epoch: 10/10
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** + I tried different sequence lengths; 200 and 15 gave similar results, but the smaller value decreases iteration time, so I chose it.+ I tried hidden_dim values of 256, 512 and 1024; 512 seemed to be the optimal choice.+ I tried n_layers = 3, but it increased iteration time considerably, so I decided to stick with 2. + I chose a batch size of 1024, the largest my card can handle. I started with 50 and it was very slow.+ Embedding size and learning rate are the same as in the sentiment RNN notebook. I tried changing the embedding size, but 400 seemed to work well. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
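Before moving on to the checkpoint below: the comparisons in the answer were run by hand, one setting at a time. Purely as an illustrative sketch (not code from this notebook), a quick comparison of two hidden sizes could be scripted with the functions and globals already defined (`RNN`, `train_rnn`, `train_loader`, `criterion`, ...); the candidate values and the one-epoch budget are arbitrary choices for the example:

```python
# Illustrative sketch only: train each candidate briefly and compare the reported losses.
# All names besides `hd`, `candidate` and `opt` come from earlier cells in this notebook.
for hd in (256, 512):
    candidate = RNN(vocab_size, output_size, embedding_dim, hd, n_layers, dropout=0.5)
    if train_on_gpu:
        candidate.cuda()
    opt = torch.optim.Adam(candidate.parameters(), lr=learning_rate)
    print('--- hidden_dim = {} ---'.format(hd))
    train_rnn(candidate, batch_size, opt, criterion, 1, show_every_n_batches=show_every_n_batches)
```

One epoch is a crude proxy, but it is usually enough to see which setting is trending toward the lower loss.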
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if train_on_gpu:
current_seq = current_seq.cpu()  # move back to the CPU so np.roll can work on the tensor
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: o!"
elaine:" oh, you know you know, i was wondering... i think i'm a little tired of this. i was wondering if i could have to go to a hospital with a little while.
jerry: well, i don't know..
elaine: what?
jerry: well, i'm sorry to go. i got a good time.
kramer:(to jerry) you know, the whole thing was the only time i ever had.
jerry: oh, you know, you should have a good name, and i don't have any idea how i am.
george: i know. i was just wondering...
george:(to jerry) you know, it's like an old person who lives out.
george: yeah, i think i'm going to get out of your life!
kramer: hey.
jerry:(to elaine) you know, i think you could have to get out of the way to go.
george: what do you mean, you know, i was just thinking about the other person.
george: yeah, but, i'm not a good guy.
george:(smiling as he gets up) oh, i'm not gonna get some help.(kramer leaves)
kramer:(to jerry and george) hey, what about the car?
kramer: no, you got a problem?(jerry shakes the head and takes it off)
jerry: hey, hey!
jerry: oh, hi.
george: hey.(he leaves)
george: hey, hey.
elaine: hey.
kramer:(to jerry) i know, i was just curious.
elaine: what?
jerry: i think we could get the car.
kramer: yeah.
elaine: yeah?
elaine: yeah, yeah.
jerry:(still in the room) oh, you know, the whole thing is a good story, i think i
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab and vocab_to_int dictionaries
int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
vocab_to_int = {word: i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dict_punc={}
dict_punc['.']= '||Period||'
dict_punc[',']= '||Comma||'
dict_punc['"']= '||QuotationMark||'
dict_punc[';']= '||Semicolon||'
dict_punc['!']= '||ExclamationMark||'
dict_punc['?']= '||QuestionMark||'
dict_punc['(']= '||LeftParentheses||'
dict_punc[')']= '||RightParentheses||'
dict_punc['-']= '||Dash||'
dict_punc['\n']= '||Return||'
return dict_punc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import torch
import numpy as np
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
overall_batch_size = batch_size * sequence_length
n_batches = len(words)//overall_batch_size
words = words[:n_batches * overall_batch_size]
features = []
targets = []
for n in range(0, len(words) - sequence_length):
extract = words[n:n+sequence_length+1]
features.append(extract[:-1])
targets.append(extract[-1])
data = TensorDataset(torch.from_numpy(np.asarray(features)),torch.from_numpy(np.asarray(targets)))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle= True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[29, 30, 31, 32, 33],
[18, 19, 20, 21, 22],
[12, 13, 14, 15, 16],
[24, 25, 26, 27, 28],
[ 9, 10, 11, 12, 13],
[22, 23, 24, 25, 26],
[ 0, 1, 2, 3, 4],
[42, 43, 44, 45, 46],
[44, 45, 46, 47, 48],
[17, 18, 19, 20, 21]])
torch.Size([10])
tensor([34, 23, 17, 29, 14, 27, 5, 47, 49, 22])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# determine new output and new hidden state
lstm_out, hidden = self.lstm(self.embedding(nn_input), hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# format output by providning last batch of labels
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:,-1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)
# calculate the loss and perform backprop
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 75 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 100
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.468788607597351
Epoch: 1/10 Loss: 4.708209788322448
Epoch: 1/10 Loss: 4.5116928157806395
Epoch: 1/10 Loss: 4.3805000948905946
Epoch: 1/10 Loss: 4.276957492351532
Epoch: 1/10 Loss: 4.202009693145752
Epoch: 2/10 Loss: 4.089056338349434
Epoch: 2/10 Loss: 3.9647012057304383
Epoch: 2/10 Loss: 3.9526324934959414
Epoch: 2/10 Loss: 3.9264090890884398
Epoch: 2/10 Loss: 3.8922725286483764
Epoch: 2/10 Loss: 3.886980987548828
Epoch: 3/10 Loss: 3.793838778587237
Epoch: 3/10 Loss: 3.698517692089081
Epoch: 3/10 Loss: 3.694029365062714
Epoch: 3/10 Loss: 3.693326570510864
Epoch: 3/10 Loss: 3.6970602197647096
Epoch: 3/10 Loss: 3.681812164783478
Epoch: 4/10 Loss: 3.6275707034843614
Epoch: 4/10 Loss: 3.5559589643478393
Epoch: 4/10 Loss: 3.558397488117218
Epoch: 4/10 Loss: 3.569981798648834
Epoch: 4/10 Loss: 3.550428279876709
Epoch: 4/10 Loss: 3.5651992645263673
Epoch: 5/10 Loss: 3.504249768463151
Epoch: 5/10 Loss: 3.460891184806824
Epoch: 5/10 Loss: 3.4593525285720825
Epoch: 5/10 Loss: 3.4697485666275023
Epoch: 5/10 Loss: 3.466137366771698
Epoch: 5/10 Loss: 3.4760731177330015
Epoch: 6/10 Loss: 3.486960964283275
Epoch: 6/10 Loss: 3.4841730070114134
Epoch: 6/10 Loss: 3.4751413831710813
Epoch: 6/10 Loss: 3.4929092803001405
Epoch: 6/10 Loss: 3.4733638048171995
Epoch: 6/10 Loss: 3.4908827805519103
Epoch: 7/10 Loss: 3.553412668571834
Epoch: 7/10 Loss: 3.596403796195984
Epoch: 7/10 Loss: 3.570721734046936
Epoch: 7/10 Loss: 3.5697068161964416
Epoch: 7/10 Loss: 3.552559132575989
Epoch: 7/10 Loss: 3.5417642970085144
Epoch: 8/10 Loss: 3.4532359601072065
Epoch: 8/10 Loss: 3.4146340131759643
Epoch: 8/10 Loss: 3.4002895312309267
Epoch: 8/10 Loss: 3.392551321029663
Epoch: 8/10 Loss: 3.399917004108429
Epoch: 8/10 Loss: 3.3987038106918335
Epoch: 9/10 Loss: 3.313203875207047
Epoch: 9/10 Loss: 3.2371962213516237
Epoch: 9/10 Loss: 3.2601167340278625
Epoch: 9/10 Loss: 3.265363881111145
Epoch: 9/10 Loss: 3.2717169580459595
Epoch: 9/10 Loss: 3.2813616585731507
Epoch: 10/10 Loss: 3.188305862083073
Epoch: 10/10 Loss: 3.118462990283966
Epoch: 10/10 Loss: 3.1387903776168824
Epoch: 10/10 Loss: 3.1454210653305053
Epoch: 10/10 Loss: 3.1648027210235594
Epoch: 10/10 Loss: 3.1711217975616455
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I played around with the three values for sequence_length, hidden_dim and n_layers until I found a setting which reduced the loss below 3.5 within 10 epochs. **sequence_length = 75** I experimented with several values for the sequence length. In general, the shorter the sequence, the faster the training; the longer the sequence, the better the results. **hidden_dim = 512** The higher the value, the better the results, but also the longer the training took. **n_layers = 2** I took the value suggested in the course, i.e. 2 for n_layers. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
#print (trained_rnn)
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
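The top-k step inside `generate` can be seen in isolation with a toy score tensor. The sketch below is only illustrative (the scores and the choice of `top_k = 3` are made up); the real sampling happens in the cell that follows.
```
import numpy as np
import torch
import torch.nn.functional as F

# made-up word scores for a 6-word vocabulary, shaped (batch=1, vocab)
scores = torch.tensor([[1.2, 0.3, 2.5, 0.1, 1.9, 0.4]])

p = F.softmax(scores, dim=1).data            # scores -> probabilities
top_p, top_i = p.topk(3)                     # keep the 3 most likely words
top_p = top_p.numpy().squeeze()
top_i = top_i.numpy().squeeze()

# sample among the top 3, weighted by their re-normalized probabilities
word_i = np.random.choice(top_i, p=top_p / top_p.sum())
print(top_i, word_i)
```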
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:  # move the sequence back to the cpu so np.roll below can handle it
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 500 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry:...(they both laugh) i mean it's a lot of pressure, and the big screen.
george:(laughs) oh, i got the money! i'm sure you were in the lobby!
elaine:(shouting) well, i don't think so.
elaine: well, what d'you think?
george: what is that, a vampire?
jerry: well, i was trying to make a little gangrene.
elaine:(looking at the door) well, that's why we just need a job.
jerry:(to george) what are you doing?
george:(looking at her watch) well, i guess.
jerry: so you think i can go out?
jerry: no. no. no...
george:(to jerry) what about you?
elaine: well, i don't know what this is. i don't have to do something like that.
jerry: no, no, no.
jerry: no, i don't think so.
elaine: no.
jerry: i can't do that.
george: i can't do this. you don't wanna have to do this.
jerry: what are you gonna do?
george: you know... i think that's it. i don't want to see him.
jerry: you mean," oh no! i don't want to talk about this. i mean, i know i could go to the movies..."
george: i can't.
jerry: no, you can't......
jerry: no, no... no. i can't. i'm gonna be a good person. i don't have a job...
george: i know, i don't want to see her, so i can get going.
george: you know, i mean, i was just thinking about it, you know, you know what? i mean, what are you doing with this now?
kramer: well, i'm sure you flourish. i mean, what is it about?
jerry: i don't think so.
george: well, maybe you should see each other...
jerry: no, no, i don't know. no, no, no no. i can't go to the bathroom...
george: i can't. i don't think i can get you.
george: no.
jerry: so what do you think?
george: well, i don't
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
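A small aside on the cell below: the explicit open/write/close works, but a `with` block closes the file automatically even if the write fails. A sketch of that variant (same filename, same `generated_script` variable):
```
# equivalent to the cell below, with automatic file closing
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```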
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# Capture all unique words
text = set(text)
# Use enumerate to number each unique word
# Convert to a dictionary
vocab_to_int = {word: x for x, word in enumerate(text)}
int_to_vocab = {value: key for key, value in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_tokens = {'.': '||Period||',
'"': '||Quotation_Mark||',
',': '||Comma||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return punc_tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
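Before implementing `batch_data`, the sliding window can be sanity-checked on the toy example above. This is only an illustrative sketch (the variable names are made up), not the implementation asked for in the next cell.
```
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

# every window of sequence_length words predicts the word that follows it
n_windows = len(words) - sequence_length
features = [words[i:i + sequence_length] for i in range(n_windows)]
targets = [words[i + sequence_length] for i in range(n_windows)]
print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
loader = DataLoader(data, batch_size=2)
```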
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
# Create list for all features and target
# Convert to tensor later
features = list()
target = list()
# Set a tracking variable equal to sequence_length
# Also set the initial index
# Keep track that each loops captures values of
# sequence_length and that it doesn't go out of range
track = sequence_length
idx = 0
while track != len(words):
features.append(words[idx:track])
target.append(words[track])
track += 1
idx += 1
feature_tensor, target_tensor = torch.from_numpy(np.array(features)), torch.from_numpy(np.array(target))
data = TensorDataset(feature_tensor, target_tensor)
dataloader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return dataloader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 27, 28, 29, 30, 31],
[ 28, 29, 30, 31, 32],
[ 11, 12, 13, 14, 15],
[ 2, 3, 4, 5, 6],
[ 33, 34, 35, 36, 37],
[ 12, 13, 14, 15, 16],
[ 39, 40, 41, 42, 43],
[ 44, 45, 46, 47, 48],
[ 41, 42, 43, 44, 45],
[ 17, 18, 19, 20, 21]])
torch.Size([10])
tensor([ 32, 33, 16, 7, 38, 17, 44, 49, 46, 22])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
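The two hints boil down to some shape bookkeeping, which can be checked with random tensors before writing the class. A minimal sketch with arbitrary sizes (not part of the graded implementation):
```
import torch

batch_size, seq_length, hidden_dim, output_size = 10, 5, 16, 21

# pretend this came out of the LSTM: (batch, seq, hidden)
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)

# hint 1: stack all time steps before the fully-connected layer
stacked = lstm_output.contiguous().view(-1, hidden_dim)    # (50, 16)

# pretend a fully-connected layer produced scores for every time step
fc_output = torch.randn(stacked.size(0), output_size)      # (50, 21)

# hint 2: reshape back and keep only the scores for the last word
output = fc_output.view(batch_size, -1, output_size)       # (10, 5, 21)
out = output[:, -1]                                        # (10, 21)
print(stacked.shape, output.shape, out.shape)
```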
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.dropout = dropout
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(0.25)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embedding = self.embed(nn_input)
output, hidden = self.lstm(embedding, hidden)
output = output.contiguous().view(-1, self.hidden_dim)
output = self.dropout(output)
output = self.fc(output)
output = output.view(batch_size, -1, self.output_size)
# return one batch of output word scores and the hidden state
return output[:, -1], hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# Create new variables for the hidden state
hidden = tuple([each.data for each in hidden])
# Zero out gradients for each loop
rnn.zero_grad()
# Perform backpropagation and optimization
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
# Apply clip_grad_norm to prevent exploding gradients
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 15 epoch(s)...
Epoch: 1/15 Loss: 5.932481727600098
Epoch: 1/15 Loss: 5.609589366912842
Epoch: 1/15 Loss: 4.915335075378418
Epoch: 1/15 Loss: 4.662799255371094
Epoch: 1/15 Loss: 4.524736275672913
Epoch: 1/15 Loss: 4.461187871456146
Epoch: 2/15 Loss: 4.318342820415652
Epoch: 2/15 Loss: 4.241352446079254
Epoch: 2/15 Loss: 4.198463047981262
Epoch: 2/15 Loss: 4.1829080476760865
Epoch: 2/15 Loss: 4.155256084442138
Epoch: 2/15 Loss: 4.141346951007843
Epoch: 3/15 Loss: 4.064674224068479
Epoch: 3/15 Loss: 3.996867968082428
Epoch: 3/15 Loss: 3.989917013168335
Epoch: 3/15 Loss: 3.9693440270423888
Epoch: 3/15 Loss: 3.965656131744385
Epoch: 3/15 Loss: 3.9609901819229125
Epoch: 4/15 Loss: 3.891640623652838
Epoch: 4/15 Loss: 3.830382008075714
Epoch: 4/15 Loss: 3.8481295218467713
Epoch: 4/15 Loss: 3.8383309936523435
Epoch: 4/15 Loss: 3.8204400143623354
Epoch: 4/15 Loss: 3.8388526749610903
Epoch: 5/15 Loss: 3.7714360724619733
Epoch: 5/15 Loss: 3.7292725639343263
Epoch: 5/15 Loss: 3.7187277369499205
Epoch: 5/15 Loss: 3.72862787771225
Epoch: 5/15 Loss: 3.7329050831794737
Epoch: 5/15 Loss: 3.735792852401733
Epoch: 6/15 Loss: 3.6696951813329526
Epoch: 6/15 Loss: 3.630079164505005
Epoch: 6/15 Loss: 3.6309495787620545
Epoch: 6/15 Loss: 3.653272488117218
Epoch: 6/15 Loss: 3.6412178597450255
Epoch: 6/15 Loss: 3.6696426706314087
Epoch: 7/15 Loss: 3.6001835252211345
Epoch: 7/15 Loss: 3.564189603805542
Epoch: 7/15 Loss: 3.566000518321991
Epoch: 7/15 Loss: 3.5795662059783937
Epoch: 7/15 Loss: 3.588450217247009
Epoch: 7/15 Loss: 3.585212655544281
Epoch: 8/15 Loss: 3.5395417828870013
Epoch: 8/15 Loss: 3.5026482157707215
Epoch: 8/15 Loss: 3.512623707294464
Epoch: 8/15 Loss: 3.5089778084754943
Epoch: 8/15 Loss: 3.5217876381874085
Epoch: 8/15 Loss: 3.545771559238434
Epoch: 9/15 Loss: 3.4811750468684406
Epoch: 9/15 Loss: 3.430513657093048
Epoch: 9/15 Loss: 3.456518579483032
Epoch: 9/15 Loss: 3.4504245810508727
Epoch: 9/15 Loss: 3.472285758495331
Epoch: 9/15 Loss: 3.4976372919082643
Epoch: 10/15 Loss: 3.432078034897161
Epoch: 10/15 Loss: 3.378758978843689
Epoch: 10/15 Loss: 3.414504225730896
Epoch: 10/15 Loss: 3.423582736492157
Epoch: 10/15 Loss: 3.425485800266266
Epoch: 10/15 Loss: 3.4443059549331667
Epoch: 11/15 Loss: 3.390011310335097
Epoch: 11/15 Loss: 3.3476593317985537
Epoch: 11/15 Loss: 3.3624373059272767
Epoch: 11/15 Loss: 3.3882552642822263
Epoch: 11/15 Loss: 3.378609456539154
Epoch: 11/15 Loss: 3.4005018153190614
Epoch: 12/15 Loss: 3.3481186279436437
Epoch: 12/15 Loss: 3.312580629825592
Epoch: 12/15 Loss: 3.3290738682746888
Epoch: 12/15 Loss: 3.3391229257583617
Epoch: 12/15 Loss: 3.3450065274238585
Epoch: 12/15 Loss: 3.360274095058441
Epoch: 13/15 Loss: 3.313242359616892
Epoch: 13/15 Loss: 3.273540425777435
Epoch: 13/15 Loss: 3.287512360572815
Epoch: 13/15 Loss: 3.2953737897872926
Epoch: 13/15 Loss: 3.322347050666809
Epoch: 13/15 Loss: 3.3345552592277525
Epoch: 14/15 Loss: 3.280492841470532
Epoch: 14/15 Loss: 3.2335841975212096
Epoch: 14/15 Loss: 3.252591944694519
Epoch: 14/15 Loss: 3.257689573764801
Epoch: 14/15 Loss: 3.275568275928497
Epoch: 14/15 Loss: 3.307724580287933
Epoch: 15/15 Loss: 3.2501531638265626
Epoch: 15/15 Loss: 3.2193027510643004
Epoch: 15/15 Loss: 3.2298267803192138
Epoch: 15/15 Loss: 3.2346806817054747
Epoch: 15/15 Loss: 3.250320815086365
Epoch: 15/15 Loss: 3.2484185514450075
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** My model's hyperparameters were mainly based on the standard recommended settings for an RNN model. I started with a batch size of 256; rather than going much larger or smaller, the common range is around 128-256. I first used a hidden dimension of 256, which gave average results; after changing it to 512, I noticed a significant improvement. As the loss log above shows, the model was still improving and would have continued to improve with more epochs had it not been for my limited GPU hours. For the number of layers, I also saw an improvement when I used 3 layers instead of my initial 2. I applied the recommended learning rate of 0.001 and felt no need to alter that particular parameter. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 750 # modify the length to your preference
prime_word = 'elaine' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
elaine: distracted apparatus huhh argh chunky.
jerry: hey, what do you think?
kramer: oh yeah, i know what i did.
jerry: well, you know.. i know, you don't have to talk to you.
george:(to elaine) hey...
george:(to the intercom) you know i got the new plates.
kramer: yeah..(to elaine)...
george:(to the waitress) oh, yeah?
kramer:(laughing) well, you know, i think i'm gonna be a good driver.(to george) so what?
newman: oh, i think you know.
jerry: oh, no, no, no, no...
jerry:(to kramer) i don't know, i think i was just a comedian!
jerry: oh my god.(she leaves)
elaine: hey, jerry.
jerry:(to george) hey, hey, how ya doing?
elaine: what are you talking about?
jerry: i got it.
george: well you know you don't think you have to talk with him?
george: i don't know, i just wanted you to get the money.
elaine: oh, i got a great idea.
kramer: oh, you know i think i was gonna be the usher of a friend of mine.
jerry: well, i guess you could have said something.
kramer:(to jerry) oh, you don't have to go.
kramer: i don't want to talk to him about that, but you don't want to get together.
george: i thought it was the same way in a long time.
george: oh, i know.
kramer: well, you know, i'm gonna call her. i got a lot of coffee.(elaine leaves.)
jerry:(to kramer) hey, what are you doing?
george: i don't know. i don't care.
elaine: i know what i'm gonna do.
elaine: i thought it was a little bit.(to george) hey, how 'bout you?
elaine: well, i was thinking about a lot of people, you know, i was gonna be honest. i'm a man.
jerry:(to elaine) you know i was just curious, i don't know if you don't have any trouble, but i got to get it out.
jerry: i thought you were in a hospital.
elaine: well, i'm gonna go see that...
jerry: oh, i got it. i just wanted to be a little chat. i don't know if it was a little trouble.
kramer: well, you should see the bathrooms to the left.
kramer:(to jerry) well, i just want to see you again.(they shake a kiss)
kramer: yeah.
jerry: oh..........
george: i don't know.
george:(to kramer) what are you doing here?
george: i'm a comedian.
jerry: oh yeah, i got it.
jerry: oh, yeah, yeah.. i don't know what you want.
kramer: i don't think so.
george:(to jerry) i know, but you know, the other door is not a little uncomfortable.
elaine:(to george and george) hey.
elaine:(to elaine) you can't get me to a woman, and i think i was in the shower.
kramer: i got a big problem. i can't get you a discount.(to jerry) so you got a little problem
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# find out unique text elements
text_uniq = np.unique(text)
# build the dicts
vocab_to_int, int_to_vocab = {},{}
for i, vocab in enumerate(text_uniq):
vocab_to_int[vocab] = i
int_to_vocab[i] = vocab
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {}
key_list = ['.',',','"',';','!','?','(',')','-','\n']
value_list = ['||period||','||comma||','||quotation_mark||','||semicolon||','||exclamation_mark||','||question_mark||',
'||left_parentheses||','||right_parentheses||','||dash||','||return||']
for key,value in zip(key_list,value_list):
token_dict[key] = value
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# slide a window over the word sequence: each sequence_length-word chunk is a feature, the next word is its target
size_feature = sequence_length
size_target = 1
num_complete_tensors = len(words)-size_feature-size_target+1
feature_tensors, target_tensors = [],[]
for i in range(num_complete_tensors):
cur_chunk = words[i:i+size_feature]
feature_tensors.append(torch.tensor(cur_chunk))
target_tensors.append(torch.tensor(words[i+size_feature:i+size_feature+size_target]))
feature_tensors, target_tensors = torch.stack(feature_tensors, dim=0), torch.stack(target_tensors, dim=0).squeeze()
# prepare dataset and dataloader
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
batch_dl = batch_data(list(int_to_vocab.keys()), 5, 12)
print('Number of batches: {:d}'.format(len(batch_dl)))
for i,(x,y) in enumerate(batch_dl):
if i == 0:
print('data shape:',x.numpy().shape)
print('target shape:',y.numpy().shape)
else:
break
###Output
Number of batches: 1782
data shape: (12, 5)
target shape: (12,)
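###Markdown
The cell below is an optional, self-contained sketch (not part of the project code) that rebuilds the toy example from the batching instructions by hand, just to make the feature/target window layout concrete; the toy `words` list and `seq_len` here are illustrative values only.
###Code
import torch
from torch.utils.data import TensorDataset, DataLoader

# toy example from the batching instructions: words = [1..7], sequence_length = 4
words = [1, 2, 3, 4, 5, 6, 7]
seq_len = 4

# build the sliding windows by hand: [1,2,3,4] -> 5, [2,3,4,5] -> 6, [3,4,5,6] -> 7
features = torch.tensor([words[i:i + seq_len] for i in range(len(words) - seq_len)])
targets = torch.tensor([words[i + seq_len] for i in range(len(words) - seq_len)])

toy_loader = DataLoader(TensorDataset(features, targets), batch_size=2)
for x, y in toy_loader:
    print(x.tolist(), '->', y.tolist())
###Output
_____no_output_____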
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.embedding = nn.Embedding(vocab_size,embedding_dim)
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
if train_on_gpu:
nn_input = nn_input.cuda()
batch_size = nn_input.size(0)
embeds = self.embedding(nn_input)
lstm_out, hidden_out = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1,self.hidden_dim)
out = self.fc(lstm_out)
out = out.view(batch_size, -1, self.output_size)
out = out[:,-1]
# return one batch of output word scores and the hidden state
return out, hidden_out
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
        # initialize the hidden and cell states (random values here rather than zeros), and move to GPU if available
        h_0, c_0 = torch.randn(self.n_layers, batch_size, self.hidden_dim), torch.randn(self.n_layers, batch_size, self.hidden_dim)
if train_on_gpu:
h_0, c_0 = h_0.cuda(), c_0.cuda()
return (h_0, c_0)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
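###Markdown
As a quick aside (not required by the project), the sketch below checks the two reshaping hints from the instructions on dummy tensors so the intermediate shapes are easy to see; every dimension here is made up for illustration, and the underscored names avoid clobbering the notebook's own variables.
###Code
import torch
import torch.nn as nn

# dummy dimensions, chosen only for illustration
batch_size_, seq_len_, embed_dim_, hidden_dim_, output_size_ = 4, 10, 16, 32, 50

lstm = nn.LSTM(embed_dim_, hidden_dim_, num_layers=2, batch_first=True)
fc = nn.Linear(hidden_dim_, output_size_)

x = torch.randn(batch_size_, seq_len_, embed_dim_)      # stands in for embedded input
lstm_out, _ = lstm(x)                                   # (batch, seq, hidden)

# hint 1: stack the lstm outputs before the fully-connected layer
stacked = lstm_out.contiguous().view(-1, hidden_dim_)   # (batch * seq, hidden)
scores = fc(stacked)                                    # (batch * seq, output)

# hint 2: reshape and keep only the last time step of every sequence
scores = scores.view(batch_size_, -1, output_size_)[:, -1]
print(scores.shape)                                     # torch.Size([4, 50])
###Output
_____no_output_____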
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
hidden = tuple([item.data for item in hidden])
if train_on_gpu:
rnn = rnn.cuda()
inp = inp.cuda()
target = target.cuda()
optimizer.zero_grad()
# perform backpropagation and optimization
output, hidden_out = rnn(inp,hidden)
# return the loss over a batch and the hidden state produced by our model
loss = criterion(output,target)
loss.backward()
optimizer.step()
return loss.item(), hidden_out
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
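###Markdown
One optional safeguard worth noting: `forward_back_prop` above calls `optimizer.step()` without clipping gradients. A common addition for LSTMs is `nn.utils.clip_grad_norm_` between `backward()` and `step()`; the sketch below shows how such a variant could look (it is not used in this notebook, and `max_norm=5.0` is an illustrative value).
###Code
import torch.nn as nn

def backward_clip_step(loss, rnn, optimizer, max_norm=5.0):
    """Sketch only: backprop, clip the gradient norm, then take an optimizer step.

    This mirrors the tail of forward_back_prop above, with clipping added as a
    guard against exploding gradients.
    """
    loss.backward()
    nn.utils.clip_grad_norm_(rnn.parameters(), max_norm)
    optimizer.step()
###Output
_____no_output_____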
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 1e-3
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 5.8827545433044435
Epoch: 1/20 Loss: 5.471796773910523
Epoch: 1/20 Loss: 4.897974108695984
Epoch: 1/20 Loss: 4.709197922229767
Epoch: 1/20 Loss: 4.672951670646667
Epoch: 1/20 Loss: 4.684121459960937
Epoch: 1/20 Loss: 4.5736645269393925
Epoch: 1/20 Loss: 4.434690805912018
Epoch: 1/20 Loss: 4.401263837337494
Epoch: 1/20 Loss: 4.339982184410095
Epoch: 1/20 Loss: 4.4515293908119205
Epoch: 1/20 Loss: 4.473567490577698
Epoch: 1/20 Loss: 4.472095362663269
Epoch: 2/20 Loss: 4.2811312776354935
Epoch: 2/20 Loss: 4.14436742734909
Epoch: 2/20 Loss: 4.041140449523926
Epoch: 2/20 Loss: 3.984357841014862
Epoch: 2/20 Loss: 4.029985696792602
Epoch: 2/20 Loss: 4.107645743370056
Epoch: 2/20 Loss: 4.037956480503082
Epoch: 2/20 Loss: 3.9308385152816774
Epoch: 2/20 Loss: 3.9431484541893007
Epoch: 2/20 Loss: 3.8947560715675356
Epoch: 2/20 Loss: 4.013376460075379
Epoch: 2/20 Loss: 4.011820290088654
Epoch: 2/20 Loss: 4.020317241191864
Epoch: 3/20 Loss: 3.9347498252049813
Epoch: 3/20 Loss: 3.8635044817924498
Epoch: 3/20 Loss: 3.79850256729126
Epoch: 3/20 Loss: 3.760134382724762
Epoch: 3/20 Loss: 3.804867191314697
Epoch: 3/20 Loss: 3.895087893486023
Epoch: 3/20 Loss: 3.8341637301445006
Epoch: 3/20 Loss: 3.744856756210327
Epoch: 3/20 Loss: 3.7439158334732054
Epoch: 3/20 Loss: 3.6921118960380555
Epoch: 3/20 Loss: 3.814968077659607
Epoch: 3/20 Loss: 3.818302644252777
Epoch: 3/20 Loss: 3.8344378204345704
Epoch: 4/20 Loss: 3.749680287820759
Epoch: 4/20 Loss: 3.7008279113769533
Epoch: 4/20 Loss: 3.6371273674964906
Epoch: 4/20 Loss: 3.6005082235336303
Epoch: 4/20 Loss: 3.644476945400238
Epoch: 4/20 Loss: 3.760942946910858
Epoch: 4/20 Loss: 3.6968390197753904
Epoch: 4/20 Loss: 3.606396040916443
Epoch: 4/20 Loss: 3.592374572753906
Epoch: 4/20 Loss: 3.566641948223114
Epoch: 4/20 Loss: 3.661299341201782
Epoch: 4/20 Loss: 3.681837968349457
Epoch: 4/20 Loss: 3.692543801307678
Epoch: 5/20 Loss: 3.6275897530456325
Epoch: 5/20 Loss: 3.581077600002289
Epoch: 5/20 Loss: 3.532329022884369
Epoch: 5/20 Loss: 3.4922583351135255
Epoch: 5/20 Loss: 3.535411409854889
Epoch: 5/20 Loss: 3.643982630252838
Epoch: 5/20 Loss: 3.6017822046279906
Epoch: 5/20 Loss: 3.5154053139686585
Epoch: 5/20 Loss: 3.4829505133628844
Epoch: 5/20 Loss: 3.4649724316596986
Epoch: 5/20 Loss: 3.573293375492096
Epoch: 5/20 Loss: 3.587392466545105
Epoch: 5/20 Loss: 3.5993476929664614
Epoch: 6/20 Loss: 3.539543304896084
Epoch: 6/20 Loss: 3.4914314432144167
Epoch: 6/20 Loss: 3.4489063720703124
Epoch: 6/20 Loss: 3.409725233078003
Epoch: 6/20 Loss: 3.451918685913086
Epoch: 6/20 Loss: 3.5574803409576417
Epoch: 6/20 Loss: 3.527270511627197
Epoch: 6/20 Loss: 3.4441512217521666
Epoch: 6/20 Loss: 3.4065813708305357
Epoch: 6/20 Loss: 3.3922367510795595
Epoch: 6/20 Loss: 3.498180916309357
Epoch: 6/20 Loss: 3.5089379954338074
Epoch: 6/20 Loss: 3.5202835512161257
Epoch: 7/20 Loss: 3.466540114429344
Epoch: 7/20 Loss: 3.4293387637138366
Epoch: 7/20 Loss: 3.37921977186203
Epoch: 7/20 Loss: 3.3467257833480835
Epoch: 7/20 Loss: 3.3857890625
Epoch: 7/20 Loss: 3.4920310673713684
Epoch: 7/20 Loss: 3.4634868988990783
Epoch: 7/20 Loss: 3.3942826914787294
Epoch: 7/20 Loss: 3.3400264410972595
Epoch: 7/20 Loss: 3.335711709022522
Epoch: 7/20 Loss: 3.435256112575531
Epoch: 7/20 Loss: 3.4504217257499694
Epoch: 7/20 Loss: 3.453887300491333
Epoch: 8/20 Loss: 3.4101044391084874
Epoch: 8/20 Loss: 3.3705474805831908
Epoch: 8/20 Loss: 3.3313017230033877
Epoch: 8/20 Loss: 3.3031418352127075
Epoch: 8/20 Loss: 3.322565710067749
Epoch: 8/20 Loss: 3.4285081362724306
Epoch: 8/20 Loss: 3.4071185383796694
Epoch: 8/20 Loss: 3.3390920276641847
Epoch: 8/20 Loss: 3.285259199619293
Epoch: 8/20 Loss: 3.28136901140213
Epoch: 8/20 Loss: 3.3814146580696107
Epoch: 8/20 Loss: 3.3921678657531737
Epoch: 8/20 Loss: 3.402129384994507
Epoch: 9/20 Loss: 3.367977087465725
Epoch: 9/20 Loss: 3.3292803716659547
Epoch: 9/20 Loss: 3.285241536617279
Epoch: 9/20 Loss: 3.2622166481018064
Epoch: 9/20 Loss: 3.28110412979126
Epoch: 9/20 Loss: 3.377574740886688
Epoch: 9/20 Loss: 3.3644859809875487
Epoch: 9/20 Loss: 3.2914617862701414
Epoch: 9/20 Loss: 3.245231585979462
Epoch: 9/20 Loss: 3.24176548910141
Epoch: 9/20 Loss: 3.334145568370819
Epoch: 9/20 Loss: 3.355361909389496
Epoch: 9/20 Loss: 3.362131910800934
Epoch: 10/20 Loss: 3.323807299813742
Epoch: 10/20 Loss: 3.2924648237228396
Epoch: 10/20 Loss: 3.252436424255371
Epoch: 10/20 Loss: 3.225214361667633
Epoch: 10/20 Loss: 3.2471663670539854
Epoch: 10/20 Loss: 3.33991685628891
Epoch: 10/20 Loss: 3.330364699840546
Epoch: 10/20 Loss: 3.247652466773987
Epoch: 10/20 Loss: 3.2045754976272582
Epoch: 10/20 Loss: 3.210772439956665
Epoch: 10/20 Loss: 3.3014297075271606
Epoch: 10/20 Loss: 3.312302227973938
Epoch: 10/20 Loss: 3.32081632900238
Epoch: 11/20 Loss: 3.2925097076270355
Epoch: 11/20 Loss: 3.2598066701889037
Epoch: 11/20 Loss: 3.2215188722610475
Epoch: 11/20 Loss: 3.192022349834442
Epoch: 11/20 Loss: 3.207159646987915
Epoch: 11/20 Loss: 3.3021978573799133
Epoch: 11/20 Loss: 3.295800669670105
Epoch: 11/20 Loss: 3.2167235169410704
Epoch: 11/20 Loss: 3.1715054354667664
Epoch: 11/20 Loss: 3.180895182132721
Epoch: 11/20 Loss: 3.2694139246940614
Epoch: 11/20 Loss: 3.2763503761291504
Epoch: 11/20 Loss: 3.2853624773025514
Epoch: 12/20 Loss: 3.2595213640585032
Epoch: 12/20 Loss: 3.228976185321808
Epoch: 12/20 Loss: 3.1904720010757446
Epoch: 12/20 Loss: 3.1677104144096373
Epoch: 12/20 Loss: 3.1771764454841613
Epoch: 12/20 Loss: 3.2625682702064513
Epoch: 12/20 Loss: 3.2598324780464174
Epoch: 12/20 Loss: 3.1825680804252623
Epoch: 12/20 Loss: 3.14391504573822
Epoch: 12/20 Loss: 3.1515842213630676
Epoch: 12/20 Loss: 3.2465812997817993
Epoch: 12/20 Loss: 3.255126944065094
Epoch: 12/20 Loss: 3.252337818145752
Epoch: 13/20 Loss: 3.228453645273136
Epoch: 13/20 Loss: 3.1969143118858336
Epoch: 13/20 Loss: 3.160392182350159
Epoch: 13/20 Loss: 3.1498020768165587
Epoch: 13/20 Loss: 3.1527626843452454
Epoch: 13/20 Loss: 3.23797203540802
Epoch: 13/20 Loss: 3.2397040314674377
Epoch: 13/20 Loss: 3.159898642539978
Epoch: 13/20 Loss: 3.112909959793091
Epoch: 13/20 Loss: 3.126127009868622
Epoch: 13/20 Loss: 3.216250279903412
Epoch: 13/20 Loss: 3.2206654920578
Epoch: 13/20 Loss: 3.2250590567588806
Epoch: 14/20 Loss: 3.2057413028858766
Epoch: 14/20 Loss: 3.174961276054382
Epoch: 14/20 Loss: 3.1433686108589174
Epoch: 14/20 Loss: 3.128344171047211
Epoch: 14/20 Loss: 3.1300667433738707
Epoch: 14/20 Loss: 3.21014280462265
Epoch: 14/20 Loss: 3.2106705827713014
Epoch: 14/20 Loss: 3.1333493208885193
Epoch: 14/20 Loss: 3.0931936955451964
Epoch: 14/20 Loss: 3.118290696144104
Epoch: 14/20 Loss: 3.1940725479125978
Epoch: 14/20 Loss: 3.1885512571334838
Epoch: 14/20 Loss: 3.198356782913208
Epoch: 15/20 Loss: 3.1788542937445077
Epoch: 15/20 Loss: 3.1494881253242495
Epoch: 15/20 Loss: 3.12201313829422
Epoch: 15/20 Loss: 3.1047225017547606
Epoch: 15/20 Loss: 3.1048373336791992
Epoch: 15/20 Loss: 3.1902570605278013
Epoch: 15/20 Loss: 3.184747663974762
Epoch: 15/20 Loss: 3.1126883883476255
Epoch: 15/20 Loss: 3.0690169372558596
Epoch: 15/20 Loss: 3.0904337477684023
Epoch: 15/20 Loss: 3.170151035308838
Epoch: 15/20 Loss: 3.176811032772064
Epoch: 15/20 Loss: 3.1765131573677063
Epoch: 16/20 Loss: 3.1604603412104586
Epoch: 16/20 Loss: 3.1303557682037355
Epoch: 16/20 Loss: 3.1007839751243593
Epoch: 16/20 Loss: 3.090423481464386
Epoch: 16/20 Loss: 3.080367215156555
Epoch: 16/20 Loss: 3.1715230650901796
Epoch: 16/20 Loss: 3.1666041359901427
Epoch: 16/20 Loss: 3.092101203918457
Epoch: 16/20 Loss: 3.0528088274002076
Epoch: 16/20 Loss: 3.0694243574142455
Epoch: 16/20 Loss: 3.1546624503135683
Epoch: 16/20 Loss: 3.1471745347976685
Epoch: 16/20 Loss: 3.1516128964424133
Epoch: 17/20 Loss: 3.146731736367209
Epoch: 17/20 Loss: 3.1116217761039735
Epoch: 17/20 Loss: 3.0825230402946473
Epoch: 17/20 Loss: 3.074547887802124
Epoch: 17/20 Loss: 3.0634388513565063
Epoch: 17/20 Loss: 3.1554924945831297
Epoch: 17/20 Loss: 3.145382293701172
Epoch: 17/20 Loss: 3.0732890739440917
Epoch: 17/20 Loss: 3.0398951873779296
Epoch: 17/20 Loss: 3.0498748807907106
Epoch: 17/20 Loss: 3.1394288368225096
Epoch: 17/20 Loss: 3.128764380455017
Epoch: 17/20 Loss: 3.131531785964966
Epoch: 18/20 Loss: 3.126443617853218
Epoch: 18/20 Loss: 3.097953369140625
Epoch: 18/20 Loss: 3.069013321876526
Epoch: 18/20 Loss: 3.059356016159058
Epoch: 18/20 Loss: 3.0507427201271056
Epoch: 18/20 Loss: 3.1432304277420045
Epoch: 18/20 Loss: 3.129775236606598
Epoch: 18/20 Loss: 3.058178415775299
Epoch: 18/20 Loss: 3.0284024324417116
Epoch: 18/20 Loss: 3.041048900604248
Epoch: 18/20 Loss: 3.1125103726387024
Epoch: 18/20 Loss: 3.1079008116722107
Epoch: 18/20 Loss: 3.114216778755188
Epoch: 19/20 Loss: 3.1124168340389695
Epoch: 19/20 Loss: 3.080207706451416
Epoch: 19/20 Loss: 3.0497253031730653
Epoch: 19/20 Loss: 3.045476428985596
Epoch: 19/20 Loss: 3.0336588478088378
Epoch: 19/20 Loss: 3.1272361021041872
Epoch: 19/20 Loss: 3.1134327125549315
Epoch: 19/20 Loss: 3.0428115601539614
Epoch: 19/20 Loss: 3.008831652164459
Epoch: 19/20 Loss: 3.027957892894745
Epoch: 19/20 Loss: 3.095298026561737
Epoch: 19/20 Loss: 3.09766544675827
Epoch: 19/20 Loss: 3.120760479927063
Epoch: 20/20 Loss: 3.092309493152473
Epoch: 20/20 Loss: 3.0657439670562745
Epoch: 20/20 Loss: 3.042880407333374
Epoch: 20/20 Loss: 3.033113569736481
Epoch: 20/20 Loss: 3.0219835548400877
Epoch: 20/20 Loss: 3.1120588240623475
Epoch: 20/20 Loss: 3.100849515914917
Epoch: 20/20 Loss: 3.0251371483802796
Epoch: 20/20 Loss: 2.994367801189423
Epoch: 20/20 Loss: 3.0078635754585266
Epoch: 20/20 Loss: 3.076541030406952
Epoch: 20/20 Loss: 3.0813296260833742
Epoch: 20/20 Loss: 3.0846407613754274
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** By a few trial-and-error tests. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
        # move the sequence back to the cpu so numpy can manipulate it
        if train_on_gpu:
            current_seq = current_seq.cpu()
        # the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
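###Markdown
Before generating a full script, here is a minimal, stand-alone sketch of the top-k sampling step described above; the score vector is made up purely for illustration and is not real model output.
###Code
import numpy as np
import torch
import torch.nn.functional as F

# fake word scores over a 10-word vocabulary (arbitrary values, batch of one)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 1.0, 0.4, 0.6, 0.05]])

p = F.softmax(scores, dim=1).data            # scores -> probabilities
p, top_i = p.topk(5)                         # keep the 5 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

# sample one of the top-5 ids, weighted by their renormalized probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print('top ids:', top_i, '-> sampled id:', word_i)
###Output
_____no_output_____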
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: a half of us.
jerry: what"
kramer: yeah.
kramer: hey, i got a castle.
elaine: i think i could go to the bathroom today. i'm going to be honest with the girl.
george: well, i know how much it is. i mean, you don't want to be ashamed of us. i mean, i would have to get you a little bit.
jerry: i think you're a real animal.
captain: you can't do it.
jerry: you can't get a lesson.
kramer: well, i can't get a call with you to the *have*.
jerry: so you don't want it"
george: no, i got it from the pet and then i can go back to the bathroom.
george: i don't understand.
george: you know what" what is that"
kramer: oh, no, no, no!
jerry: i don't care!
jerry:(to himself) hey.
jerry: hey!
jerry: i got a challenge!
elaine: oh, no.
jerry:(agonised) oh, i'm sure i'm not a fine guy.
jerry: so...
elaine: oh, thank you, lloyd question, and you know, i don't know what the electricity is.
jerry:(doubtful) what do you mean"
jerry: i don't know, but you know what they did with the same thing.
jerry:(horrified) oh, i can't believe that. i can't believe you could do something.
george: what about the movie"
kramer: yeah.
elaine: so, uh, uh, what are you doing"
george: i think it's a comedian, and it's a good one, and the only thing i was in korea, and the police will be.
elaine: oh, you know what, what"
sally: i can't do it!
estelle: i don't care
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab_to_int = {}
int_to_vocab = {}
# Build a dictionary that maps words to integers
for index, word in enumerate(set(text)):
vocab_to_int[word] = index
int_to_vocab[index] = word
## Other way of a creating a lookup table:
## This is shorter implementation and personally looks prettier,
## but uses more computing power, since it would loop through the text twice
# vocab_to_int = {word: index for index, word in enumerate(set(text))}
# int_to_vocab = {index: word for word, index in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
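###Markdown
Side note: `Counter` is imported above but never actually used in `create_lookup_tables`. If a frequency-ordered vocabulary were wanted (the most common words getting the smallest ids), a sketch could look like the cell below; the test does not require any particular id order, so this is purely optional.
###Code
from collections import Counter

def create_lookup_tables_by_freq(text):
    """Optional variant: assign the smallest ids to the most frequent words."""
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    vocab_to_int = {word: index for index, word in enumerate(sorted_vocab)}
    int_to_vocab = {index: word for word, index in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

# tiny illustrative check on a made-up word list
print(create_lookup_tables_by_freq(['a', 'b', 'a', 'c', 'a', 'b'])[0])  # {'a': 0, 'b': 1, 'c': 2}
###Output
_____no_output_____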
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
punctuation_tokens = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_Mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'-' : '||Dash||',
'\n': '||Return||'
}
return punctuation_tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
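###Markdown
To see these tokens in action, the short sketch below applies the dictionary returned by `token_lookup()` to one made-up line of dialogue; the real pre-processing lives in `helper.preprocess_and_save_data`, so this cell is purely illustrative.
###Code
# reuse the token_lookup() defined above on a single made-up line of dialogue
sample = 'george: are you through?\n'

for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))

print(sample.lower().split())
###Output
_____no_output_____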
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
n_batches = len(words)//batch_size
# Get only full batches
words = words[:n_batches*batch_size]
# Get words sequence length
words_seq = len(words) - sequence_length
# Initialize features and targets array
features, targets = [], []
# Iterate through words_seq array
for index in range(0, words_seq):
features.append(words[index: index + sequence_length])
targets.append(words[index + sequence_length])
# Create Tensor datasets
data = TensorDataset(torch.from_numpy(np.array(features)), torch.from_numpy(np.array(targets)))
# Define DataLoader with SHUFFLE enabled
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# Return a dataloader
return data_loader
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 34, 35, 36, 37, 38],
[ 42, 43, 44, 45, 46],
[ 18, 19, 20, 21, 22],
[ 26, 27, 28, 29, 30],
[ 13, 14, 15, 16, 17],
[ 40, 41, 42, 43, 44],
[ 44, 45, 46, 47, 48],
[ 19, 20, 21, 22, 23],
[ 8, 9, 10, 11, 12],
[ 39, 40, 41, 42, 43]])
torch.Size([10])
tensor([ 39, 47, 23, 31, 18, 45, 49, 24, 13, 44])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# Set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Define embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# Define dropout layer
self.dropout = nn.Dropout(dropout)
# Define linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# Apply embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# Apply stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# Fully-connected layer
out = self.fc(lstm_out)
# Reshape Tensor into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch
# Return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# if GPU is available, move data to cuda
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# creating new variables for the hidden state
hidden = tuple([each.data for each in hidden])
# apply zero gradients
rnn.zero_grad()
# get the output and hidden state from our RNN model
output, hidden = rnn(inp, hidden)
# perform backpropagation
loss = criterion(output, target)
loss.backward()
# prevent the exploding gradient problem
nn.utils.clip_grad_norm_(rnn.parameters(), 5) # using clipping size 5
# perform optimization
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
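###Markdown
A brief aside on the line `hidden = tuple([each.data for each in hidden])` above: it re-wraps the hidden state so that backpropagation is truncated at batch boundaries instead of reaching back through every previous batch. The cell below is a slightly more explicit, equivalent sketch of the same idea (not used by the training code).
###Code
def detach_hidden(hidden):
    """Equivalent sketch of `tuple([each.data for each in hidden])`:
    detach the (h, c) tensors from the previous batch's graph so gradients
    do not flow back beyond the current batch."""
    return tuple(h.detach() for h in hidden)
###Output
_____no_output_____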
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 15 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
    from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
def keep_awake(iterable, delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import keep_awake
for i in keep_awake(range(5)):
# do iteration with lots of work here
"""
with active_session(delay, interval): yield from iterable
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
with active_session():
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.647586232185364
Epoch: 1/10 Loss: 4.915851899147034
Epoch: 1/10 Loss: 4.708997131824494
Epoch: 1/10 Loss: 4.584449920654297
Epoch: 1/10 Loss: 4.459322264194489
Epoch: 1/10 Loss: 4.3853831396102905
Epoch: 1/10 Loss: 4.355678760528565
Epoch: 1/10 Loss: 4.329544389247895
Epoch: 1/10 Loss: 4.2798150453567505
Epoch: 1/10 Loss: 4.2664533681869505
Epoch: 1/10 Loss: 4.207230734825134
Epoch: 1/10 Loss: 4.194660914421082
Epoch: 1/10 Loss: 4.160004375934601
Epoch: 2/10 Loss: 4.077065358723491
Epoch: 2/10 Loss: 3.9927687644958496
Epoch: 2/10 Loss: 3.9700258765220644
Epoch: 2/10 Loss: 3.965728746891022
Epoch: 2/10 Loss: 3.944202760219574
Epoch: 2/10 Loss: 3.9456622161865234
Epoch: 2/10 Loss: 3.9429451003074645
Epoch: 2/10 Loss: 3.9441390428543093
Epoch: 2/10 Loss: 3.9431638832092286
Epoch: 2/10 Loss: 3.9218858842849733
Epoch: 2/10 Loss: 3.94416619682312
Epoch: 2/10 Loss: 3.9136468710899353
Epoch: 2/10 Loss: 3.9300042304992675
Epoch: 3/10 Loss: 3.8301444031482887
Epoch: 3/10 Loss: 3.747689549922943
Epoch: 3/10 Loss: 3.7435680756568908
Epoch: 3/10 Loss: 3.7652660818099974
Epoch: 3/10 Loss: 3.780910517215729
Epoch: 3/10 Loss: 3.7442073192596435
Epoch: 3/10 Loss: 3.7715844507217406
Epoch: 3/10 Loss: 3.7534438972473145
Epoch: 3/10 Loss: 3.7664890785217287
Epoch: 3/10 Loss: 3.7494632449150087
Epoch: 3/10 Loss: 3.7663048615455628
Epoch: 3/10 Loss: 3.785918951511383
Epoch: 3/10 Loss: 3.7936558322906495
Epoch: 4/10 Loss: 3.6896489784737265
Epoch: 4/10 Loss: 3.6305820322036744
Epoch: 4/10 Loss: 3.6111019825935364
Epoch: 4/10 Loss: 3.629357347488403
Epoch: 4/10 Loss: 3.6163987798690798
Epoch: 4/10 Loss: 3.641990068435669
Epoch: 4/10 Loss: 3.628404330253601
Epoch: 4/10 Loss: 3.643685276031494
Epoch: 4/10 Loss: 3.662715617656708
Epoch: 4/10 Loss: 3.663536382675171
Epoch: 4/10 Loss: 3.6722543935775755
Epoch: 4/10 Loss: 3.663912796974182
Epoch: 4/10 Loss: 3.6782504148483275
Epoch: 5/10 Loss: 3.590292312882163
Epoch: 5/10 Loss: 3.5230266184806824
Epoch: 5/10 Loss: 3.537108985424042
Epoch: 5/10 Loss: 3.536691324710846
Epoch: 5/10 Loss: 3.552379696846008
Epoch: 5/10 Loss: 3.549173876285553
Epoch: 5/10 Loss: 3.5439232172966
Epoch: 5/10 Loss: 3.564034878730774
Epoch: 5/10 Loss: 3.5638724241256714
Epoch: 5/10 Loss: 3.5705464839935304
Epoch: 5/10 Loss: 3.5798576941490174
Epoch: 5/10 Loss: 3.590088435649872
Epoch: 5/10 Loss: 3.6058732466697694
Epoch: 6/10 Loss: 3.5092677732637108
Epoch: 6/10 Loss: 3.424694261074066
Epoch: 6/10 Loss: 3.4490982160568238
Epoch: 6/10 Loss: 3.44060208940506
Epoch: 6/10 Loss: 3.4697526264190675
Epoch: 6/10 Loss: 3.4857979879379273
Epoch: 6/10 Loss: 3.4981783933639528
Epoch: 6/10 Loss: 3.4996261191368103
Epoch: 6/10 Loss: 3.497036925792694
Epoch: 6/10 Loss: 3.4993367347717284
Epoch: 6/10 Loss: 3.5351843285560607
Epoch: 6/10 Loss: 3.5169530653953553
Epoch: 6/10 Loss: 3.5304459300041198
Epoch: 7/10 Loss: 3.453754098454783
Epoch: 7/10 Loss: 3.379715575695038
Epoch: 7/10 Loss: 3.393222677230835
Epoch: 7/10 Loss: 3.4129773449897765
Epoch: 7/10 Loss: 3.403028757095337
Epoch: 7/10 Loss: 3.423676063537598
Epoch: 7/10 Loss: 3.4175072388648986
Epoch: 7/10 Loss: 3.434573110103607
Epoch: 7/10 Loss: 3.452266815185547
Epoch: 7/10 Loss: 3.4461662254333496
Epoch: 7/10 Loss: 3.4583889331817628
Epoch: 7/10 Loss: 3.46296471118927
Epoch: 7/10 Loss: 3.4933126888275146
Epoch: 8/10 Loss: 3.3884432133564277
Epoch: 8/10 Loss: 3.3305090460777285
Epoch: 8/10 Loss: 3.3453262214660646
Epoch: 8/10 Loss: 3.3505094079971314
Epoch: 8/10 Loss: 3.3469587635993956
Epoch: 8/10 Loss: 3.3652437143325806
Epoch: 8/10 Loss: 3.374142222881317
Epoch: 8/10 Loss: 3.3998689737319947
Epoch: 8/10 Loss: 3.4054835710525513
Epoch: 8/10 Loss: 3.410038697242737
Epoch: 8/10 Loss: 3.405794072628021
Epoch: 8/10 Loss: 3.4406572251319885
Epoch: 8/10 Loss: 3.427229196548462
Epoch: 9/10 Loss: 3.357062491503629
Epoch: 9/10 Loss: 3.2949435071945192
Epoch: 9/10 Loss: 3.288629199028015
Epoch: 9/10 Loss: 3.3159893465042116
Epoch: 9/10 Loss: 3.335582625389099
Epoch: 9/10 Loss: 3.3342440614700317
Epoch: 9/10 Loss: 3.324640080451965
Epoch: 9/10 Loss: 3.3436761827468873
Epoch: 9/10 Loss: 3.340917959690094
Epoch: 9/10 Loss: 3.3731163539886473
Epoch: 9/10 Loss: 3.3803577036857604
Epoch: 9/10 Loss: 3.3862738556861878
Epoch: 9/10 Loss: 3.374608239173889
Epoch: 10/10 Loss: 3.313469806239625
Epoch: 10/10 Loss: 3.256133699417114
Epoch: 10/10 Loss: 3.278370774269104
Epoch: 10/10 Loss: 3.2552844524383544
Epoch: 10/10 Loss: 3.2916567215919494
Epoch: 10/10 Loss: 3.2963435263633727
Epoch: 10/10 Loss: 3.3065272932052614
Epoch: 10/10 Loss: 3.3204574265480042
Epoch: 10/10 Loss: 3.3246264982223512
Epoch: 10/10 Loss: 3.33449991941452
Epoch: 10/10 Loss: 3.343105429649353
Epoch: 10/10 Loss: 3.3348606910705567
Epoch: 10/10 Loss: 3.35474857711792
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:**The majority of the hyperparameters were chosen through experimentation, building on previous experience and the earlier lessons.- I chose `sequence_length = 15`. A larger `sequence_length` results in slower training, while a very small `sequence_length` reduces the context of the text and results in a higher loss. Through experimenting, I settled on `15` as a reasonable balance between training time and training results.- I chose `batch_size = 128`. I also experimented with `32`, `64` and `256`. However, a larger `batch_size` requires more computational resources, so I settled on `128` as this provided optimal results. I might use a different size, but I believe it is an optimal choice for my `learning_rate`.- I chose `num_epochs = 10`. Generally, training for a higher number of `epochs` potentially results in better network performance. On the other hand, there's a risk of overfitting: training for too many `epochs` might give good performance on the training data but poor performance on the validation set, because the network will generalize poorly to unseen data.- I chose `learning_rate = 0.001`. This `learning_rate` usually works well with the Adam optimizer in most cases, therefore I decided to stick with `0.001`.- I chose `embedding_dim = 200`. There is a rule that the `embedding dimension` should be smaller than the `vocab_size`. However, getting this right was a result of constant experimentation. I believe it's a good idea to experiment with different `embedding dimension` sizes, from `100` up to maybe even `1000`, as different `RNN Architectures` use various sizes. In my case, I found that using between `200` and `300` resulted in optimal performance, but in the end I chose `200`.- I chose `hidden_dim = 256`. It's usually good practice to try `hidden dimension` sizes of `128`, `256` and `512`, maybe even less/more, depending on the data set. It's worth mentioning that a larger `hidden dimension` requires more computational power, while a smaller one might hurt prediction quality.- I chose `n_layers = 2`. I referred to a quote from Andrej Karpathy, where he said: *"In practice, it is often the case that 3-layer neural networks will outperform 2-layer nets, but going even deeper (4,5,6-layer) rarely helps much more."* However, using more layers also requires more computing power and takes longer to train. Therefore, I chose to use `2` layers. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
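To make the top-k step concrete, here is a minimal, hedged sketch of that sampling idea in isolation, using toy scores rather than the trained model's output:
```python
import numpy as np
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.1, 2.0, 0.5, 1.5, 0.2]])        # toy word scores for a 5-word vocabulary
p = F.softmax(scores, dim=1).data                          # turn scores into probabilities
p, top_i = p.topk(3)                                       # keep only the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())            # sample among them, renormalizing the probabilities
print(word_i)                                              # one of the indices 1, 3 or 2
```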
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation tokens keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:42: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
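As a quick illustration (a hand-made toy vocabulary, not the project's actual data), the two dictionaries are simply inverses of each other:
```python
# hypothetical three-word vocabulary, ids assigned arbitrarily for illustration
vocab_to_int = {'jerry:': 0, 'hello': 1, 'newman': 2}
int_to_vocab = {i: w for w, i in vocab_to_int.items()}

assert int_to_vocab[vocab_to_int['hello']] == 'hello'   # the round trip recovers the original word
```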
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_count = Counter(text)
sorted_vocab = sorted(word_count, key=word_count.get, reverse=True)
vocab_to_int = {word:idx for idx,word in enumerate(sorted_vocab)}
int_to_vocab = {idx:word for word,idx in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
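For intuition, here is a minimal standalone sketch of how such a token dictionary might be applied to raw text; the project's `helper.preprocess_and_save_data` presumably does the equivalent internally, so treat this as an illustration rather than the graded code (only a subset of the tokens is shown):
```python
# a hedged, standalone illustration of the tokenization idea (not the graded helper code)
token_dict = {'.': '||Period||', ',': '||Comma||', '?': '||Question_mark||'}  # subset for brevity
text = 'are you through? you do of course try on, when you buy?'
for key, token in token_dict.items():
    text = text.replace(key, ' {} '.format(token))      # surround each token with spaces
print(text.lower().split())
# ['are', 'you', 'through', '||question_mark||', 'you', 'do', 'of', 'course', 'try', 'on',
#  '||comma||', 'when', 'you', 'buy', '||question_mark||']
```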
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
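A tiny NumPy sketch of that windowing step (hedged; the graded `batch_data` below wraps the same idea in a `TensorDataset`/`DataLoader`):
```python
import numpy as np

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
n = len(words) - sequence_length                                     # number of (feature, target) pairs
features = np.array([words[i:i + sequence_length] for i in range(n)])
targets = np.array([words[i + sequence_length] for i in range(n)])
print(features)   # [[1 2 3 4] [2 3 4 5] [3 4 5 6]]
print(targets)    # [5 6 7]
```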
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
feature_tensors = np.array([words[idx:idx+sequence_length] for idx in range(len(words)-sequence_length+1)])
target_tensors = np.roll(feature_tensors[:, -1], -1)
target_tensors[-1] = feature_tensors[0][0]
data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0, 1, 2, 3, 4],
[ 1, 2, 3, 4, 5],
[ 2, 3, 4, 5, 6],
[ 3, 4, 5, 6, 7],
[ 4, 5, 6, 7, 8],
[ 5, 6, 7, 8, 9],
[ 6, 7, 8, 9, 10],
[ 7, 8, 9, 10, 11],
[ 8, 9, 10, 11, 12],
[ 9, 10, 11, 12, 13]])
torch.Size([10])
tensor([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
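To see how the shapes flow through the layers mentioned in the hints above, here is a minimal shape-tracing sketch with made-up sizes (an illustration only; the graded `RNN` class is defined in the next cell):
```python
import torch
import torch.nn as nn

batch_size, seq_len = 4, 10
vocab_size, embedding_dim, hidden_dim = 100, 16, 32                # hypothetical, tiny sizes

x = torch.randint(0, vocab_size, (batch_size, seq_len))            # (4, 10) word ids
emb = nn.Embedding(vocab_size, embedding_dim)(x)                   # (4, 10, 16)
lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=2, batch_first=True)
lstm_out, hidden = lstm(emb)                                       # (4, 10, 32)
lstm_out = lstm_out.contiguous().view(-1, hidden_dim)              # (40, 32) stacked time steps
scores = nn.Linear(hidden_dim, vocab_size)(lstm_out)               # (40, 100) word scores
out = scores.view(batch_size, -1, vocab_size)[:, -1]               # (4, 100) keep only the last time step
print(out.shape)                                                   # torch.Size([4, 100])
```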
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embed_out = self.embed(nn_input)
lstm_out, hidden = self.lstm(embed_out, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
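As a side note on the loss itself, here is a small, hedged illustration of how `nn.CrossEntropyLoss` consumes a batch of last-word scores and integer targets (toy tensors, not project data):
```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
word_scores = torch.randn(3, 10)         # scores for a batch of 3 sequences over a 10-word vocabulary
targets = torch.tensor([2, 5, 9])        # the id of the true "next word" for each sequence
loss = criterion(word_scores, targets)   # averaged over the batch
print(loss.item())                       # a single Python float, as returned by forward_back_prop
```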
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
# perform backpropagation and optimization
loss = criterion(output.squeeze(), target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 2000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 4.925563133716583
Epoch: 1/10 Loss: 4.46799368929863
Epoch: 1/10 Loss: 4.326589870333671
Epoch: 2/10 Loss: 4.092425009504879
Epoch: 2/10 Loss: 3.935730939745903
Epoch: 2/10 Loss: 3.9000248016119
Epoch: 3/10 Loss: 3.796361310829736
Epoch: 3/10 Loss: 3.7286092329025267
Epoch: 3/10 Loss: 3.7069442616701127
Epoch: 4/10 Loss: 3.619192291566843
Epoch: 4/10 Loss: 3.581351883530617
Epoch: 4/10 Loss: 3.57560409617424
Epoch: 5/10 Loss: 3.511567875928612
Epoch: 5/10 Loss: 3.4792624189853667
Epoch: 5/10 Loss: 3.476007718205452
Epoch: 6/10 Loss: 3.4206481310766366
Epoch: 6/10 Loss: 3.3911499347686767
Epoch: 6/10 Loss: 3.3962059500217436
Epoch: 7/10 Loss: 3.35711593152538
Epoch: 7/10 Loss: 3.3309505153894423
Epoch: 7/10 Loss: 3.3335655217170714
Epoch: 8/10 Loss: 3.3050681187593405
Epoch: 8/10 Loss: 3.2832074712514876
Epoch: 8/10 Loss: 3.2866541645526888
Epoch: 9/10 Loss: 3.261805445548058
Epoch: 9/10 Loss: 3.237679073691368
Epoch: 9/10 Loss: 3.241973174214363
Epoch: 10/10 Loss: 3.223333733215923
Epoch: 10/10 Loss: 3.2023458824157713
Epoch: 10/10 Loss: 3.204044640421867
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** First I tried a model with dropout and sigmoid layers. That model did not perform well and was too slow: its starting loss was above 9.5 and did not go below 7.0 even after 5 epochs. After removing the dropout and sigmoid layers, the model started with a loss of 4.925563133716583 and went down to 3.204044640421867 after 10 epochs.* **sequence_length** a sequence of 5 gave very slow convergence. Increasing it to 10 gave enough speed.* **batch_size** set to 128. Higher batch sizes started slowing down the model and required more memory; with lower batch sizes, the loss was oscillating.* **num_epochs** set to 10. Convergence became slower after 10 epochs.* **learning_rate** set to 0.001, found by trial and error. 0.0001 had very slow convergence.* **vocab_size** set to the number of unique words in our text.* **output_size** equal to the vocab size; it gives the id of the next word.* **hidden_dim** tried hidden dims of 128, 256 and 512. 256 gave better results in fewer epochs; 512 was a bit slower than 256.* **n_layers** set to 2 based on trial and error. 2 layers gave good results in less time.* **show_every_n_batches** set to 2000 as there are too many batches. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
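One detail worth isolating is how the loop below slides its fixed-length input window forward with `np.roll`; a minimal sketch with toy ids and a hypothetical predicted word id:
```python
import numpy as np

current_seq = np.array([[0, 0, 11, 25, 7]])    # padded sequence, most recent word id last
word_i = 42                                    # hypothetical next word id from the network
current_seq = np.roll(current_seq, -1, 1)      # shift left along the sequence axis -> [[0, 11, 25, 7, 0]]
current_seq[-1][-1] = word_i                   # overwrite the wrapped slot -> [[0, 11, 25, 7, 42]]
print(current_seq)
```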
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation tokens keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:39: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
# print(text[:2000])
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
all_text = ' '.join([word for word in text]) # consolidate to one string
# all_text = ''.join([c for c in all_text if c not in punctuation]) # remove punctuation
# all_text = all_text.lower() # change upper to lower
print(all_text[:2000]) # test
text_split = all_text.split('\n')
all_text = ''.join(text_split)
words = all_text.split()
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab, 0)}
# int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab, 1)}
# print(type(sorted_vocab)) # <class 'list'>
# print(len(sorted_vocab)) # 71
# print(type(int_to_vocab)) # <class 'dict'>
# print(len(int_to_vocab)) # 71
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# for i in range(len(int_to_vocab)):
# print(i)
# print(int_to_vocab[i])
# print(vocab_to_int['||Comma||'])
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
moe_szyslak moe's tavern where the elite meet to drink bart_simpson eh yeah hello is mike there last name rotch moe_szyslak hold on i'll check mike rotch mike rotch hey has anybody seen mike rotch lately moe_szyslak listen you little puke one of these days i'm gonna catch you and i'm gonna carve my name on your back with an ice pick moe_szyslak whats the matter homer you're not your normal effervescent self homer_simpson i got my problems moe give me another one moe_szyslak homer hey you should not drink to forget your problems barney_gumble yeah you should only drink to enhance your social skills
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punc_to_token = {
'.': "||Period||",
',': "||Comma||",
'"': "||Quotation_Mark||",
';': "||Semicolon||",
'!': "||Exclamation_mark||",
'?': "||Question_mark||",
'(': "||Left_Parentheses||",
')': "||Right_Parentheses||",
'-': "||Dash||",
'\n': "||Return||"
}
return punc_to_token
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
this is out ||period|| ||period|| ||period|| and out is one of the single most enjoyable experiences of life ||period|| people ||period|| ||period|| ||period|| did you ever hear people talking about we should go out ||question_mark|| this is what theyre talking about ||period|| ||period|| ||period|| this whole thing ||comma|| were all out now ||comma|| no one is home ||period|| not one person here is home ||comma|| were all out ||exclamation_mark|| there are people trying to find us ||comma|| they dont know where we are ||period|| ||left_parentheses|| on an imaginary phone ||right_parentheses|| did you ring ||question_mark|| ||comma|| i cant find him ||period|| where did he go ||question_mark|| he didnt tell me where he was going ||period|| he must have gone out ||period|| you wanna go out you get ready ||comma|| you pick out the clothes ||comma|| right ||question_mark|| you take the shower ||comma|| you get all ready ||comma|| get the cash ||comma|| get your friends ||comma|| the car ||comma|| the spot ||comma|| the reservation ||period|| ||period|| ||period|| then youre standing around ||comma|| what do you do ||question_mark|| you go we gotta be getting back ||period|| once youre out ||comma|| you wanna get back ||exclamation_mark|| you wanna go to sleep ||comma|| you wanna get up ||comma|| you wanna go out again tomorrow ||comma|| right ||question_mark|| where ever you are in life ||comma|| its my feeling ||comma|| youve gotta go ||period|| ||return|| ||return|| jerry: ||left_parentheses|| pointing at georges shirt ||right_parentheses|| see ||comma|| to me ||comma|| that button is in the worst possible spot ||period|| the second button literally makes or breaks the shirt ||comma|| look at it ||period|| its too high ||exclamation_mark|| its in no ||dash|| mans ||dash|| land ||period|| you look like you live with your mother ||period|| ||return|| ||return|| george: are you through ||question_mark|| ||return|| ||return|| jerry: you do of course try on ||comma|| whe
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = np.zeros(((len(words) - sequence_length), sequence_length), dtype=int)
targets = np.zeros((len(words) - sequence_length), dtype=int)
for i in range(len(words) - sequence_length):
features[i] = words[i : i + sequence_length]
targets[i] = words[i + sequence_length]
## test
# print(type(features)) # <class 'numpy.ndarray'>
# print(features[0]) # [0 1 2 3 4]
# print(targets[0]) # 5
# print(features[1]) # [1 2 3 4 5]
# print(targets[1]) # 6
# print(features[len(words) - sequence_length - 1]) # [44 45 46 47 48]
# print(targets[len(words) - sequence_length - 1]) # 49
data = TensorDataset(torch.from_numpy(features), torch.from_numpy(targets))
data_loader = torch.utils.data.DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
## test
#test_text = range(50)
#batch_data(test_text, sequence_length=5, batch_size=10)
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[15, 16, 17, 18, 19],
[24, 25, 26, 27, 28],
[ 5, 6, 7, 8, 9],
[44, 45, 46, 47, 48],
[22, 23, 24, 25, 26],
[ 3, 4, 5, 6, 7],
[21, 22, 23, 24, 25],
[38, 39, 40, 41, 42],
[34, 35, 36, 37, 38],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([20, 29, 10, 49, 27, 8, 26, 43, 39, 12])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embd = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout = dropout, batch_first = True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# print(type(nn_input))
# print(type(nn_input.size))
batch_size = nn_input.size(0)
# print(batch_size) # 50
# embeddings and lstm_out
embeds = self.embd(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# print(type(hidden))
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# reshape to be batch_size first
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of outputs
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
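This particular implementation (next cell) also clips gradients with `clip=5` before the optimizer step; here is a minimal standalone illustration of gradient clipping on a stand-in model (hypothetical, not the graded code):
```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                                     # stand-in model, just to produce gradients
loss = model(torch.randn(8, 4)).sum()
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=5)    # rescale gradients if their total norm exceeds 5
```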
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip=5 # gradient clipping
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
# print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
# epoch_i, n_epochs, np.average(batch_losses)))
print('Epoch: {:>4}/{:<4} Loss: {}'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 50
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 800
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 640
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 50 epoch(s)...
Epoch: 1/50 Loss: 5.253700656816363
Epoch: 1/50 Loss: 4.782527927309275
Epoch: 1/50 Loss: 4.629239576682449
Epoch: 1/50 Loss: 4.565429873764515
Epoch: 1/50 Loss: 4.454377238079905
Epoch: 1/50 Loss: 4.429828142374754
Epoch: 1/50 Loss: 4.383285737410188
Epoch: 1/50 Loss: 4.351274823397398
Epoch: 1/50 Loss: 4.319598229974508
Epoch: 1/50 Loss: 4.3010002840310335
Epoch: 1/50 Loss: 4.300960117578507
Epoch: 1/50 Loss: 4.22971854172647
Epoch: 1/50 Loss: 4.243915567919612
Epoch: 1/50 Loss: 4.201597314327955
Epoch: 1/50 Loss: 4.236023548990488
Epoch: 1/50 Loss: 4.206151943653822
Epoch: 1/50 Loss: 4.153445649892092
Epoch: 1/50 Loss: 4.19222059212625
Epoch: 1/50 Loss: 4.207992108166218
Epoch: 1/50 Loss: 4.163940661028027
Epoch: 1/50 Loss: 4.169988998025656
Epoch: 2/50 Loss: 4.067798764377021
Epoch: 2/50 Loss: 4.020531105622649
Epoch: 2/50 Loss: 4.037669951096177
Epoch: 2/50 Loss: 4.028175422921777
Epoch: 2/50 Loss: 4.011055763810873
Epoch: 2/50 Loss: 4.035206806287169
Epoch: 2/50 Loss: 4.03218558691442
Epoch: 2/50 Loss: 4.050158818438649
Epoch: 2/50 Loss: 4.02089707441628
Epoch: 2/50 Loss: 4.036111611127853
Epoch: 2/50 Loss: 4.046011611074209
Epoch: 2/50 Loss: 4.00139140971005
Epoch: 2/50 Loss: 4.017154048010707
Epoch: 2/50 Loss: 4.031743712723255
Epoch: 2/50 Loss: 4.035793996602297
Epoch: 2/50 Loss: 4.036028683185577
Epoch: 2/50 Loss: 4.054708698391915
Epoch: 2/50 Loss: 4.037981457635761
Epoch: 2/50 Loss: 4.008058787509799
Epoch: 2/50 Loss: 4.02766479216516
Epoch: 2/50 Loss: 4.02071329690516
Epoch: 3/50 Loss: 3.944538163343368
Epoch: 3/50 Loss: 3.8696200150996445
Epoch: 3/50 Loss: 3.8714216008782385
Epoch: 3/50 Loss: 3.908541977778077
Epoch: 3/50 Loss: 3.898345560953021
Epoch: 3/50 Loss: 3.925573145225644
Epoch: 3/50 Loss: 3.924049727246165
Epoch: 3/50 Loss: 3.9024879980832337
Epoch: 3/50 Loss: 3.921890266239643
Epoch: 3/50 Loss: 3.927990100905299
Epoch: 3/50 Loss: 3.9660375442355873
Epoch: 3/50 Loss: 3.9030546128749846
Epoch: 3/50 Loss: 3.9538080327212812
Epoch: 3/50 Loss: 3.9419815838336945
Epoch: 3/50 Loss: 3.944943027943373
Epoch: 3/50 Loss: 3.914364117011428
Epoch: 3/50 Loss: 3.953220413252711
Epoch: 3/50 Loss: 3.9551561255007983
Epoch: 3/50 Loss: 3.9888630975037813
Epoch: 3/50 Loss: 3.962743601575494
Epoch: 3/50 Loss: 3.964763989672065
Epoch: 4/50 Loss: 3.8705020949336078
Epoch: 4/50 Loss: 3.8253324795514345
Epoch: 4/50 Loss: 3.82480561286211
Epoch: 4/50 Loss: 3.7882513221353293
Epoch: 4/50 Loss: 3.813043589890003
Epoch: 4/50 Loss: 3.847968678548932
Epoch: 4/50 Loss: 3.8585231617093085
Epoch: 4/50 Loss: 3.82973028421402
Epoch: 4/50 Loss: 3.851959860697389
Epoch: 4/50 Loss: 3.8600806016474962
Epoch: 4/50 Loss: 3.8835461150854824
Epoch: 4/50 Loss: 3.8602547336369755
Epoch: 4/50 Loss: 3.8948065619915724
Epoch: 4/50 Loss: 3.8749670960009097
Epoch: 4/50 Loss: 3.8551158852875234
Epoch: 4/50 Loss: 3.9038047298789023
Epoch: 4/50 Loss: 3.8732550203800202
Epoch: 4/50 Loss: 3.9106360882520677
Epoch: 4/50 Loss: 3.9058148112148046
Epoch: 4/50 Loss: 3.907001294568181
Epoch: 4/50 Loss: 3.886706656217575
Epoch: 5/50 Loss: 3.808642924302916
Epoch: 5/50 Loss: 3.740386075153947
Epoch: 5/50 Loss: 3.7607973624020814
Epoch: 5/50 Loss: 3.786749940738082
Epoch: 5/50 Loss: 3.7805189918726683
Epoch: 5/50 Loss: 3.7845669619739057
Epoch: 5/50 Loss: 3.7942766156047583
Epoch: 5/50 Loss: 3.7910614032298326
Epoch: 5/50 Loss: 3.814380073547363
Epoch: 5/50 Loss: 3.8070463545620443
Epoch: 5/50 Loss: 3.815601183101535
Epoch: 5/50 Loss: 3.8312596712261437
Epoch: 5/50 Loss: 3.8114951375871895
Epoch: 5/50 Loss: 3.846424401178956
Epoch: 5/50 Loss: 3.8215173527598383
Epoch: 5/50 Loss: 3.840273302420974
Epoch: 5/50 Loss: 3.8424410365521906
Epoch: 5/50 Loss: 3.8538716416805983
Epoch: 5/50 Loss: 3.8840495612472297
Epoch: 5/50 Loss: 3.839904710277915
Epoch: 5/50 Loss: 3.866504903510213
Epoch: 6/50 Loss: 3.769699031580321
Epoch: 6/50 Loss: 3.7049839947372676
Epoch: 6/50 Loss: 3.733559547737241
Epoch: 6/50 Loss: 3.7120644196867945
Epoch: 6/50 Loss: 3.731315530091524
Epoch: 6/50 Loss: 3.7415507029742003
Epoch: 6/50 Loss: 3.7296938303858043
Epoch: 6/50 Loss: 3.7685168646275997
Epoch: 6/50 Loss: 3.7984403163194655
Epoch: 6/50 Loss: 3.75751036144793
Epoch: 6/50 Loss: 3.740093453601003
Epoch: 6/50 Loss: 3.752611465752125
Epoch: 6/50 Loss: 3.8064439587295054
Epoch: 6/50 Loss: 3.8134737070649862
Epoch: 6/50 Loss: 3.7888033472001554
Epoch: 6/50 Loss: 3.7987955920398235
Epoch: 6/50 Loss: 3.77656222358346
Epoch: 6/50 Loss: 3.818206524848938
Epoch: 6/50 Loss: 3.8012230299413203
Epoch: 6/50 Loss: 3.8344946809113027
Epoch: 6/50 Loss: 3.819535595923662
Epoch: 7/50 Loss: 3.7353359030672495
Epoch: 7/50 Loss: 3.6713490672409534
Epoch: 7/50 Loss: 3.66371367610991
Epoch: 7/50 Loss: 3.6677056018263103
Epoch: 7/50 Loss: 3.688985545933247
Epoch: 7/50 Loss: 3.7449356436729433
Epoch: 7/50 Loss: 3.712756483629346
Epoch: 7/50 Loss: 3.72131201736629
Epoch: 7/50 Loss: 3.7027687944471834
Epoch: 7/50 Loss: 3.7348790619522334
Epoch: 7/50 Loss: 3.7286715917289257
Epoch: 7/50 Loss: 3.737143274396658
Epoch: 7/50 Loss: 3.7802848126739264
Epoch: 7/50 Loss: 3.742055954411626
Epoch: 7/50 Loss: 3.7745925046503546
Epoch: 7/50 Loss: 3.753599840402603
Epoch: 7/50 Loss: 3.7628398548811672
Epoch: 7/50 Loss: 3.7981450211256744
Epoch: 7/50 Loss: 3.818891394138336
Epoch: 7/50 Loss: 3.7802766114473343
Epoch: 7/50 Loss: 3.786262919008732
Epoch: 8/50 Loss: 3.6951171376393277
Epoch: 8/50 Loss: 3.6472966499626636
Epoch: 8/50 Loss: 3.662723157927394
Epoch: 8/50 Loss: 3.6800824109464885
Epoch: 8/50 Loss: 3.652492796629667
Epoch: 8/50 Loss: 3.6624051328748464
Epoch: 8/50 Loss: 3.686840457469225
Epoch: 8/50 Loss: 3.7009461764246225
Epoch: 8/50 Loss: 3.714610445871949
Epoch: 8/50 Loss: 3.6799667228013275
Epoch: 8/50 Loss: 3.730845034122467
Epoch: 8/50 Loss: 3.7200362868607044
Epoch: 8/50 Loss: 3.711618630960584
Epoch: 8/50 Loss: 3.7234140444546937
Epoch: 8/50 Loss: 3.7211993243545294
Epoch: 8/50 Loss: 3.741114177182317
Epoch: 8/50 Loss: 3.750632618740201
Epoch: 8/50 Loss: 3.7483046911656857
Epoch: 8/50 Loss: 3.7457224164158105
Epoch: 8/50 Loss: 3.7504187412559986
Epoch: 8/50 Loss: 3.7798055570572613
Epoch: 9/50 Loss: 3.6881754429744356
Epoch: 9/50 Loss: 3.6242952913045885
Epoch: 9/50 Loss: 3.6272049475461245
Epoch: 9/50 Loss: 3.628608123213053
Epoch: 9/50 Loss: 3.635802112519741
Epoch: 9/50 Loss: 3.6489970050752163
Epoch: 9/50 Loss: 3.6721540220081805
Epoch: 9/50 Loss: 3.6496964756399395
Epoch: 9/50 Loss: 3.6754121251404284
Epoch: 9/50 Loss: 3.656955474615097
Epoch: 9/50 Loss: 3.676737105846405
Epoch: 9/50 Loss: 3.6906002059578897
Epoch: 9/50 Loss: 3.6854868937283753
Epoch: 9/50 Loss: 3.672032630071044
Epoch: 9/50 Loss: 3.733842030912638
Epoch: 9/50 Loss: 3.7055791333317756
Epoch: 9/50 Loss: 3.732548328116536
Epoch: 9/50 Loss: 3.7009195894002915
Epoch: 9/50 Loss: 3.7444977063685654
Epoch: 9/50 Loss: 3.747268568724394
Epoch: 9/50 Loss: 3.733583019673824
Epoch: 10/50 Loss: 3.6582457462458153
Epoch: 10/50 Loss: 3.6209939189255236
Epoch: 10/50 Loss: 3.634983092173934
Epoch: 10/50 Loss: 3.6201229099184276
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** For sequence_length, I tried 10, 16 and 20, and 10 worked better than the others; I also suspected that values much smaller than 10 would give the model too little context to predict the next word appropriately. For batch_size, I only tried 64: training progressed well, and according to the nvidia-smi command the memory usage was 3333MiB / 7973MiB, which seemed appropriate. For num_epochs, I tried 10, 20, 30 and 50; 30 might have been enough, but since the loss did not drop below 3.0 by the end of training I chose 50 instead of 30. For learning_rate, I only tried 0.001, since the articles I found suggested 0.001 or nearby values as appropriate. For embedding_dim, I tried 128, 256, 400 and 512 and finally chose 256, considering that the vocabulary has 46367 unique words (in the Sentiment_RNN_Exercise example, 400 was used for 74072 unique words). For hidden_dim, I tried 128, 256, 400 and 800 and finally chose 800, because I could not get the loss below 3.5 with smaller values. For n_layers, I chose 2, because the criteria require it to be 1-3. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
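As a standalone illustration of the top-k sampling step used inside `generate` (defined in the cell below), here is a minimal sketch with made-up word scores — the vocabulary size and the score values are invented for the example, they are not taken from the trained model:
```
import torch
import torch.nn.functional as F
import numpy as np

# toy scores over a 6-word vocabulary (values invented for illustration)
scores = torch.tensor([[0.1, 2.0, 0.3, 1.5, 0.05, 0.7]])
p = F.softmax(scores, dim=1).data

top_k = 3
p, top_i = p.topk(top_k)                         # keep the 3 most likely word ids
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()

# sample one of the top ids, weighted by their re-normalized probabilities
word_i = np.random.choice(top_i, p=p / p.sum())
print(top_i, word_i)                             # e.g. [1 3 5] 3
```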
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: i think you're a comedian, i don't want to know how you could make a mistake. you don't even have to pay me to be able to make it up.
elaine:(laughing) i know, it's just the damnedest side.
elaine: what is that?
george:(to himself, picks up his glasses) oh yeah.(kramer walks into the kitchen)
kramer: yeah
jerry:(to elaine) i thought you'd be upset..
jerry: i mean, i don't even have it.(to elaine) so, you want a new george:(pauses. elaine enters)
george: you want to take your shoes?
jerry:(to kramer) oh, i can't believe i just have to be able to get a new suit. i have to tell you, i was thinking of myself, and i was a little rough of the.
george: oh, yeah.
jerry: so, what do you think?
george: well, you know, i think i could take care of you.(jerry leaves)
jerry: hey! hey! hey! i thought you got any good friends.
george: you don't think i could have a ticket, but i don't know how it is.. i don't know..(he pushes jerry in the kitchen; he falls towards his head.)
elaine: oh, yeah. i was a little embarrassed. i was wondering if we could get going.
kramer: well, i gotta take it. i can't come out, i don't have to go to the bathroom..(george leaves)
kramer: yeah, yeah, yeah..(pulls up his bag)...
[setting: jerry's car]
kramer:(looking) hey.
kramer: oh, hey.
elaine:(from phone, to kramer) oh, yeah?
george: yeah.
jerry:(to george) i don't think you could get any bread.
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
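For reference, once `create_lookup_tables` is implemented (in the cell below), it could be exercised on a toy word list like this — a sketch only; the words and the resulting ids are illustrative and depend on the implementation:
```
toy_words = ['jerry', 'hello', 'jerry', 'newman', 'hello', 'jerry']
vocab_to_int, int_to_vocab = create_lookup_tables(toy_words)

print(vocab_to_int)                          # most frequent word gets the smallest id, e.g. {'jerry': 1, 'hello': 2, 'newman': 3}
print(int_to_vocab[vocab_to_int['newman']])  # round-trips back to 'newman'
```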
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
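The returned dictionary is then applied to the raw script before splitting on spaces; the sketch below is a rough approximation of what `helper.preprocess_and_save_data` does with it, written only for illustration (it is not the helper code itself and relies on the `token_lookup` implemented in the cell below):
```
raw = 'bye! bye.'
token_dict = token_lookup()
for symbol, token in token_dict.items():
    raw = raw.replace(symbol, ' {} '.format(token))
print(raw.split())   # ['bye', '||Exclamation_mark||', 'bye', '||Period||'] -- 'bye' now maps to a single id
```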
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.':'||Period||', ',':'||Comma||', '"':'||Quotation_mark||',
';':'||Semicolon||', '!':'||Exclamation_mark||', '?':'||Question_mark||',
'(':'||Left_parentheses||', ')':'||Right_parentheses||', '-':'||Dash||',
'\n':'||Return||'}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
def batch(iterable, n=1):
l = len(iterable)
for ndx in range(0, l, n):
yield iterable[ndx:min(ndx + n, l)]
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
y_len = len(words) - sequence_length
x, y = [], []
for idx in range(0, y_len):
idx_end = sequence_length + idx
x_batch = words[idx:idx_end]
x.append(x_batch)
batch_y = words[idx_end]
y.append(batch_y)
# create Tensor datasets
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
# make sure the SHUFFLE your training data
data_loader = DataLoader(data, shuffle=True, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 4, 5, 6, 7, 8],
[34, 35, 36, 37, 38],
[39, 40, 41, 42, 43],
[ 5, 6, 7, 8, 9],
[18, 19, 20, 21, 22],
[43, 44, 45, 46, 47],
[42, 43, 44, 45, 46],
[37, 38, 39, 40, 41],
[ 0, 1, 2, 3, 4],
[28, 29, 30, 31, 32]])
torch.Size([10])
tensor([ 9, 39, 44, 10, 23, 48, 47, 42, 5, 33])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.3):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# self.embedding = nn.Embedding(vocab_size, embedding_dim)
# self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
# dropout=dropout, batch_first=True)
# self.fc = nn.Linear(hidden_dim, output_size)
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# print("batch_size: ", batch_size)
# print("nn_input.shape: ", nn_input.shape) #(batch_size, sequence_len)
# print("hidden[0].shape: ", hidden[0].shape) #(n_layers, batch_size, hidden_dim)
# nn_input = nn_input.long()
embed = self.embedding(nn_input) #(batch_size, sequence_len, embedding_dim)
# print("embed.shape: ", embed.shape)
lstm_out, hidden = self.lstm(embed, hidden) #(batch_size, sequence_len, hidden_dim)
# print("lstm_output.shape: ", lstm_out.shape)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim) #(batch_size*sequence_len, hidden_dim)
# print("lstm_output.shape: ", lstm_out.shape)
# lstm_out = self.dropout(lstm_out) #(batch_size*sequence_len, hidden_dim)
# print("lstm_output.shape: ", lstm_out.shape)
output = self.fc(lstm_out) #(batch_size*sequence_len, output_size)
# print("output.shape: ", output.shape)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size) #(batch_size, sequence_len, output_size)
# print("output.shape: ", output.shape)
#get last batch
out = output[:, -1] #(batch_size, output_size)
# print("out.shape: ", out.shape)
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
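One detail the implementation below leans on, which the instructions above do not spell out: the hidden state carried over from the previous batch has to be detached from its computation graph before the new forward pass, otherwise backpropagation would try to run through every batch seen so far. A minimal, self-contained sketch of that idea — the layer sizes and tensor names here are invented for illustration:
```
import torch
import torch.nn as nn

# toy LSTM: 2 layers, input size 4, hidden size 8 (sizes invented for illustration)
lstm = nn.LSTM(input_size=4, hidden_size=8, num_layers=2, batch_first=True)
x = torch.randn(3, 5, 4)                               # (batch, seq_len, input_size)
hidden = (torch.zeros(2, 3, 8), torch.zeros(2, 3, 8))  # (n_layers, batch, hidden_dim)

out, hidden = lstm(x, hidden)
# detach the hidden state before the next batch so gradients stop at the batch boundary
hidden = tuple(h.data for h in hidden)
print([h.requires_grad for h in hidden])               # [False, False]
```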
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
h = tuple([each.data for each in hidden])
rnn.zero_grad()
output, h = rnn(inp, h)
loss = criterion(output, target.long())
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), max_norm=5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.detach().item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. Model progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 20 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.002
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.2098567094802855
Epoch: 1/10 Loss: 4.644318076133728
Epoch: 1/10 Loss: 4.480203908920288
Epoch: 1/10 Loss: 4.384352329730987
Epoch: 1/10 Loss: 4.2868800768852235
Epoch: 1/10 Loss: 4.267771987438202
Epoch: 1/10 Loss: 4.2441685366630555
Epoch: 1/10 Loss: 4.167212261676788
Epoch: 1/10 Loss: 4.173274946212769
Epoch: 1/10 Loss: 4.1280538272857665
Epoch: 1/10 Loss: 4.102700160503387
Epoch: 1/10 Loss: 4.094147276878357
Epoch: 1/10 Loss: 4.105357785701751
Epoch: 2/10 Loss: 3.9575602872805162
Epoch: 2/10 Loss: 3.8778745656013487
Epoch: 2/10 Loss: 3.8685070672035216
Epoch: 2/10 Loss: 3.857060043334961
Epoch: 2/10 Loss: 3.8463902869224547
Epoch: 2/10 Loss: 3.858669671535492
Epoch: 2/10 Loss: 3.846140776634216
Epoch: 2/10 Loss: 3.841614939689636
Epoch: 2/10 Loss: 3.8634581270217896
Epoch: 2/10 Loss: 3.829445749759674
Epoch: 2/10 Loss: 3.828117290973663
Epoch: 2/10 Loss: 3.8479499197006226
Epoch: 2/10 Loss: 3.876126905918121
Epoch: 3/10 Loss: 3.7486241174138284
Epoch: 3/10 Loss: 3.6402197880744933
Epoch: 3/10 Loss: 3.64382110786438
Epoch: 3/10 Loss: 3.658150053024292
Epoch: 3/10 Loss: 3.657683217048645
Epoch: 3/10 Loss: 3.65986829996109
Epoch: 3/10 Loss: 3.6714111528396605
Epoch: 3/10 Loss: 3.67494110250473
Epoch: 3/10 Loss: 3.6938516249656677
Epoch: 3/10 Loss: 3.6947651686668395
Epoch: 3/10 Loss: 3.6952608952522277
Epoch: 3/10 Loss: 3.703232437133789
Epoch: 3/10 Loss: 3.703385347366333
Epoch: 4/10 Loss: 3.606662732264227
Epoch: 4/10 Loss: 3.5203455362319946
Epoch: 4/10 Loss: 3.5106461696624756
Epoch: 4/10 Loss: 3.528451536178589
Epoch: 4/10 Loss: 3.532740948200226
Epoch: 4/10 Loss: 3.55157106256485
Epoch: 4/10 Loss: 3.563400938510895
Epoch: 4/10 Loss: 3.5625940647125245
Epoch: 4/10 Loss: 3.579947636604309
Epoch: 4/10 Loss: 3.5858677010536195
Epoch: 4/10 Loss: 3.5631685609817505
Epoch: 4/10 Loss: 3.594450249195099
Epoch: 4/10 Loss: 3.5846909646987917
Epoch: 5/10 Loss: 3.4989527375244895
Epoch: 5/10 Loss: 3.432142825603485
Epoch: 5/10 Loss: 3.4073234243392942
Epoch: 5/10 Loss: 3.40576322221756
Epoch: 5/10 Loss: 3.448205543041229
Epoch: 5/10 Loss: 3.4586821846961975
Epoch: 5/10 Loss: 3.448937037944794
Epoch: 5/10 Loss: 3.4722106990814208
Epoch: 5/10 Loss: 3.4802585258483885
Epoch: 5/10 Loss: 3.49347841835022
Epoch: 5/10 Loss: 3.508174928188324
Epoch: 5/10 Loss: 3.5006230998039247
Epoch: 5/10 Loss: 3.5439432735443117
Epoch: 6/10 Loss: 3.418489476373373
Epoch: 6/10 Loss: 3.3386723246574403
Epoch: 6/10 Loss: 3.3412961316108705
Epoch: 6/10 Loss: 3.3607651977539064
Epoch: 6/10 Loss: 3.3828345370292663
Epoch: 6/10 Loss: 3.384122383117676
Epoch: 6/10 Loss: 3.401136927127838
Epoch: 6/10 Loss: 3.416935426235199
Epoch: 6/10 Loss: 3.4164759802818296
Epoch: 6/10 Loss: 3.4292486023902895
Epoch: 6/10 Loss: 3.4323757557868957
Epoch: 6/10 Loss: 3.436793231010437
Epoch: 6/10 Loss: 3.4763596143722535
Epoch: 7/10 Loss: 3.3665500753674626
Epoch: 7/10 Loss: 3.274177397251129
Epoch: 7/10 Loss: 3.281336480140686
Epoch: 7/10 Loss: 3.296716101169586
Epoch: 7/10 Loss: 3.3409517517089844
Epoch: 7/10 Loss: 3.3118031821250917
Epoch: 7/10 Loss: 3.348753168106079
Epoch: 7/10 Loss: 3.3537993779182433
Epoch: 7/10 Loss: 3.342844934463501
Epoch: 7/10 Loss: 3.362269327163696
Epoch: 7/10 Loss: 3.4000584650039674
Epoch: 7/10 Loss: 3.381922396659851
Epoch: 7/10 Loss: 3.405783137321472
Epoch: 8/10 Loss: 3.320220979284649
Epoch: 8/10 Loss: 3.223982222557068
Epoch: 8/10 Loss: 3.2373883028030397
Epoch: 8/10 Loss: 3.2558323068618775
Epoch: 8/10 Loss: 3.254284800052643
Epoch: 8/10 Loss: 3.2733211941719054
Epoch: 8/10 Loss: 3.3039999499320984
Epoch: 8/10 Loss: 3.306700605392456
Epoch: 8/10 Loss: 3.310398585796356
Epoch: 8/10 Loss: 3.3307930464744566
Epoch: 8/10 Loss: 3.33497225856781
Epoch: 8/10 Loss: 3.368864639759064
Epoch: 8/10 Loss: 3.3773223094940183
Epoch: 9/10 Loss: 3.2773543751436818
Epoch: 9/10 Loss: 3.171738559246063
Epoch: 9/10 Loss: 3.222451404571533
Epoch: 9/10 Loss: 3.207970099925995
Epoch: 9/10 Loss: 3.2387558765411377
Epoch: 9/10 Loss: 3.249077039241791
Epoch: 9/10 Loss: 3.2429378185272215
Epoch: 9/10 Loss: 3.2625793347358703
Epoch: 9/10 Loss: 3.266186451435089
Epoch: 9/10 Loss: 3.283919681072235
Epoch: 9/10 Loss: 3.2922026109695435
Epoch: 9/10 Loss: 3.329367748260498
Epoch: 9/10 Loss: 3.3323066096305847
Epoch: 10/10 Loss: 3.2448489424610925
Epoch: 10/10 Loss: 3.1364177231788637
Epoch: 10/10 Loss: 3.172276804447174
Epoch: 10/10 Loss: 3.1880097351074217
Epoch: 10/10 Loss: 3.184503610610962
Epoch: 10/10 Loss: 3.2146325097084048
Epoch: 10/10 Loss: 3.2194400877952574
Epoch: 10/10 Loss: 3.250775598526001
Epoch: 10/10 Loss: 3.2303061323165894
Epoch: 10/10 Loss: 3.2487965474128724
Epoch: 10/10 Loss: 3.257084801197052
Epoch: 10/10 Loss: 3.2834471111297607
Epoch: 10/10 Loss: 3.300711100101471
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** The parameters were chosen by trial and error. For sequence_length, a trade-off has to be made between convergence time and accuracy. Likewise, hidden_dim and n_layers strongly affect convergence time, so they were chosen to keep the training time reasonable. The learning rate was set to 0.002: with a higher value such as 0.005 the validation loss was poorer, fluctuated strongly and sometimes even increased, probably because the optimizer could not settle into a local minimum, while a lower value such as 0.001 slowed training down. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
print(sequence_length)
print(pad_value)
print(prime_id)
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
print (current_seq)
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq.cpu(), -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
20
21388
8
[[21388 21388 21388 21388 21388 21388 21388 21388 21388 21388 21388 21388
21388 21388 21388 21388 21388 21388 21388 8]]
jerry: uttered, and knocks mail)
jerry:(to jerry) hey, hey, you want me to be ensconced?
jerry: yeah.
kramer: well, you don't know how to work.(jerry and kramer both worked out of his ear and goes to the kitchen.)
jerry: well, i don't know.
george:(to george) you know, it's not that easy, huh? i don't want to know that i was just saying something about it.
elaine:(to elaine) hey!(kramer is speechless)
kramer:(getting to leave) oh.
kramer: hey, i got news with that woman, and i was wondering if we want to go back to the end of the movie, we have a deal.
george: what is this?
jerry: well, i'm going to do this.
george: oh, i can't believe it sounds great.
george: i mean, i can't believe you were going to have a little too strong to me.
george: well i think it's an emergency band.
jerry: i don't want it, jerry.
kramer: hey buddy!
jerry:(to the phone) hello. hi, jerry.
george:(quietly) hey! hey, you gotta go back to work.
george:(smiling) well, you know, i'm sorry if you can go.
elaine: i mean you don't think that you are? what happened to the operation?
kramer: well, it was a little bit.
jerry: well, i don't want to talk about it.
elaine: what?
jerry: you know, i can't go out with you.(to jerry) so, you were supposed to get a little bit of a bitch, you know, i know, i was just wondering if you can get it all over your head, and i'll see ya, i don't know what you say. i don't even know what happened to
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
from tqdm.auto import tqdm
# Check if running in colab.research.google.com
try:
import google.colab
IN_COLAB = True
print('Running in Google Colab!')
except:
IN_COLAB = False
# Download and extract files to colab
if IN_COLAB:
!mkdir -p data
!wget -nc -q https://github.com/joaopamaral/deep-learning-v2-pytorch/raw/master/project-tv-script-generation/data/Seinfeld_Scripts.txt -P data
!wget -nc -q https://github.com/joaopamaral/deep-learning-v2-pytorch/raw/master/project-tv-script-generation/helper.py
!wget -nc -q https://github.com/joaopamaral/deep-learning-v2-pytorch/raw/master/project-tv-script-generation/problem_unittests.py
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
from collections import Counter
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
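As a rough sketch of how such a dictionary ends up being used (the real replacement happens inside the provided `helper.py`, whose details may differ), each symbol is swapped for its token with spaces around it before the text is split:
```
# rough sketch, not the provided helper: pad each token with spaces so the split is clean
def apply_tokens(text, token_dict):
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

print(apply_tokens('bye! bye', {'!': '||Exclamation_Mark||'}))
# ['bye', '||Exclamation_Mark||', 'bye'] -> "bye" now maps to a single word id
```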
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
lookup_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||QuotationMark||',
';': '||Semicolon||',
'!': '||ExclamationMark||',
'?': '||QuestionMark||',
'(': '||LeftParentheses||',
')': '||RightParentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return lookup_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
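Spelled out for the toy example above (a sketch of the windowing only; the tensors and DataLoader are handled in the function below):
```
# sliding-window sketch for words = [1..7] and sequence_length = 4
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]
print(pairs)   # [([1, 2, 3, 4], 5), ([2, 3, 4, 5], 6), ([3, 4, 5, 6], 7)]
```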
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
features = []
targets = []
for idx in range(0, len(words)-sequence_length):
features.append(words[idx:idx+sequence_length])
targets.append(words[idx+sequence_length])
data = TensorDataset(torch.LongTensor(features), torch.LongTensor(targets))
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
print(iter(batch_data(int_text, 5, 10)).next())
###Output
[tensor([[ 38, 6, 90, 3, 152],
[ 21, 76, 6208, 10, 67],
[ 0, 7, 35, 352, 45],
[ 0, 13, 31, 45, 1096],
[ 0, 0, 7, 5, 27],
[ 1, 93, 17496, 1, 1],
[ 134, 163, 1, 0, 0],
[ 6, 16061, 21, 6, 278],
[ 2304, 1, 0, 0, 1514],
[ 60, 12, 0, 0, 13]]), tensor([ 15, 98, 87, 38, 604, 1, 16, 20, 18, 1804])]
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[16, 17, 18, 19, 20],
[37, 38, 39, 40, 41],
[42, 43, 44, 45, 46],
[34, 35, 36, 37, 38],
[ 4, 5, 6, 7, 8],
[11, 12, 13, 14, 15],
[ 8, 9, 10, 11, 12],
[ 1, 2, 3, 4, 5],
[44, 45, 46, 47, 48],
[18, 19, 20, 21, 22]])
torch.Size([10])
tensor([21, 42, 47, 39, 9, 16, 13, 6, 49, 23])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
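Since most problems here turn out to be shape problems, a standalone walk-through of the reshaping in the hints may help (all sizes below are made up, and plain tensors stand in for the real model):
```
import torch
import torch.nn as nn

# shape walk-through of the hints above; sizes are made up for illustration
batch_size, seq_length, hidden_dim, output_size = 4, 5, 8, 10
lstm_output = torch.randn(batch_size, seq_length, hidden_dim)   # what an LSTM with batch_first=True returns
flat = lstm_output.contiguous().view(-1, hidden_dim)            # (batch*seq, hidden_dim) rows for the fc layer
scores = nn.Linear(hidden_dim, output_size)(flat)               # (batch*seq, output_size)
scores = scores.view(batch_size, -1, output_size)               # back to (batch, seq, output_size)
last = scores[:, -1]                                            # (batch, output_size): one score vector per sequence
print(flat.shape, scores.shape, last.shape)
```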
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.output_size = output_size
# define model layers
# define embedding layers for input and output words
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim,
num_layers=n_layers, dropout=dropout, batch_first=True)
# Initialize the embedding weights with a uniform distribution
self.embedding.weight.data.uniform_(-1, 1)
self.dropout = nn.Dropout(dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
embeds = self.embedding(nn_input)
lstm_output, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
output = self.dropout(lstm_output)
output = self.fc(output)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
output = output[:, -1]
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
if (train_on_gpu):
hidden = hidden[0].cuda(), hidden[1].cuda()
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed every so many batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(tqdm(train_loader, desc=f'T{epoch_i}'), 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 60 # of words in a sequence
# Batch Size
batch_size = 256
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 4
# Learning Rate
learning_rate = 0.0001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 512
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 200
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 4 epoch(s)...
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** (Write answer, here) --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
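The top-k sampling step in isolation looks roughly like this (made-up scores for a six-word vocabulary, not real model output):
```
import numpy as np
import torch
import torch.nn.functional as F

# isolated sketch of top-k sampling over made-up word scores
scores = torch.tensor([[2.0, 0.5, 1.2, 3.1, 0.1, 1.8]])     # pretend scores for a 6-word vocab
p = F.softmax(scores, dim=1).data
p, top_i = p.topk(3)                                         # keep only the 3 most likely words
p, top_i = p.numpy().squeeze(), top_i.numpy().squeeze()
word_i = np.random.choice(top_i, p=p / p.sum())              # sample among them, weighted by probability
print(top_i, word_i)
```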
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
_____no_output_____
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
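A minor variation on the cell below, if you prefer the file to be closed automatically:
```
# same effect as the cell below; the with-block closes the file for you
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```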
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
The TV Script is Not PerfectIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines. Example generated script>jerry: what about me?>>jerry: i don't have to wait.>>kramer:(to the sales table)>>elaine:(to jerry) hey, look at this, i'm a good doctor.>>newman:(to elaine) you think i have no idea of this...>>elaine: oh, you better take the phone, and he was a little nervous.>>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.>>jerry: oh, yeah. i don't even know, i know.>>jerry:(to the phone) oh, i know.>>kramer:(laughing) you know...(to jerry) you don't know.You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. Submitting This ProjectWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
###Code
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
import torch
torch.cuda.set_device(0)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_count = dict(Counter(text).most_common())
# list of words sorted in popularity
sorted_words = list(word_count.keys())
word_dict = {} # a dictionary that translates words into integers
for idx, word in enumerate(sorted_words):
word_dict[word] = idx + 1 # word ids start at 1; the most frequent word gets the smallest id
return (word_dict, {v: k for k, v in word_dict.items()})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
punct_dict = {}
punct_dict['.'] = 'Period'
punct_dict[','] = 'Comma'
punct_dict['"'] = 'Quotation_Mark'
punct_dict[';'] = 'Semicolon'
punct_dict['!'] = 'Exclamation_mark'
punct_dict['?'] = 'Question_mark'
punct_dict['('] = 'Left_Parentheses'
punct_dict[')'] = 'Right_Parentheses'
punct_dict['-'] = 'Dash'
punct_dict['\n'] = 'Return'
return punct_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# Inspect int_text: the script text converted into integer word ids via vocab_to_int
len(int_text), type(int_text)
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
print(train_on_gpu)
###Output
True
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
print(type(words), len(words), sequence_length, words[:50])
# getting the correct rows x cols shape
n_rows = len(words)-sequence_length-1
feature_tensors = np.zeros((n_rows, sequence_length), dtype=int)
target_tensors = np.zeros(n_rows, dtype=int)
# slide a window of sequence_length words over the text; the word right after each window is its target
for i in range(n_rows):
feature_tensors[i,:] = np.array(words[i:i+sequence_length])
target_tensors[i] = words[i+sequence_length]
train_data = TensorDataset(torch.from_numpy(feature_tensors), torch.from_numpy(target_tensors) )
dataloader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
<class 'range'> 50 5 range(0, 50)
torch.Size([10, 5])
tensor([[36, 37, 38, 39, 40],
[37, 38, 39, 40, 41],
[41, 42, 43, 44, 45],
[42, 43, 44, 45, 46],
[11, 12, 13, 14, 15],
[15, 16, 17, 18, 19],
[26, 27, 28, 29, 30],
[40, 41, 42, 43, 44],
[23, 24, 25, 26, 27],
[13, 14, 15, 16, 17]])
torch.Size([10])
tensor([41, 42, 46, 47, 16, 20, 31, 45, 28, 18])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define all layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.dropout = nn.Dropout(0.25)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
x = nn_input.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
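One detail worth isolating is why the hidden state is re-wrapped at the start of every batch. Here is a tiny sketch with plain tensors standing in for the LSTM state (the training code below uses `.data`, which similarly cuts the graph):
```
import torch

# detaching stops gradients from flowing back through every previous batch
h = torch.zeros(2, 3, requires_grad=True)
carried = h * 2.0                # still connected to the old computation graph
fresh = carried.detach()         # same values, but backprop stops here
print(carried.requires_grad, fresh.requires_grad)   # True False
```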
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
inputs, labels = inp.cuda(), target.cuda()
else:
inputs, labels = inp, target
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inputs, hidden)
loss = criterion(output, labels)
loss.backward()
# clip_grad_norm helps prevent the exploding gradient problem in RNNs / LSTMs.
clip=5 # gradient clipping
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return float(loss.data), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be printed every so many batches; this number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
try:
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
except RuntimeError:
print(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 32 # of words in a sequence
# Batch Size
batch_size = 64
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 7
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(set(vocab_to_int))
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
# Number of LSTM Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
print(rnn)
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
RNN(
(embedding): Embedding(21384, 256)
(lstm): LSTM(256, 512, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.25, inplace=False)
(fc): Linear(in_features=512, out_features=21384, bias=True)
)
Training for 7 epoch(s)...
Epoch: 1/7 Loss: 5.440127225399017
Epoch: 1/7 Loss: 4.897754967212677
Epoch: 1/7 Loss: 4.680213342666626
Epoch: 1/7 Loss: 4.530625684738159
Epoch: 1/7 Loss: 4.477496458053589
Epoch: 1/7 Loss: 4.4489430527687075
Epoch: 1/7 Loss: 4.380565386295318
Epoch: 1/7 Loss: 4.35412612247467
Epoch: 1/7 Loss: 4.334784769058228
Epoch: 1/7 Loss: 4.314755368709564
Epoch: 1/7 Loss: 4.274496835231781
Epoch: 1/7 Loss: 4.265003135681153
Epoch: 1/7 Loss: 4.2271694922447205
Epoch: 1/7 Loss: 4.175936364173889
Epoch: 1/7 Loss: 4.173582122802735
Epoch: 1/7 Loss: 4.127317697525024
Epoch: 1/7 Loss: 4.146075783729553
Epoch: 1/7 Loss: 4.097160451412201
Epoch: 1/7 Loss: 4.096870188236236
Epoch: 1/7 Loss: 4.12374263381958
Epoch: 1/7 Loss: 4.050253786087036
Epoch: 1/7 Loss: 4.087054382324219
Epoch: 1/7 Loss: 4.1159306249618535
Epoch: 1/7 Loss: 4.084246746540069
Epoch: 1/7 Loss: 4.060337846279144
Epoch: 1/7 Loss: 4.0439495315551754
Epoch: 1/7 Loss: 4.032072714805603
Epoch: 2/7 Loss: 3.941748465810503
Epoch: 2/7 Loss: 3.8693338651657103
Epoch: 2/7 Loss: 3.8512733607292176
Epoch: 2/7 Loss: 3.8223975238800048
Epoch: 2/7 Loss: 3.822737952232361
Epoch: 2/7 Loss: 3.8429362573623655
Epoch: 2/7 Loss: 3.8815074272155763
Epoch: 2/7 Loss: 3.8553989009857177
Epoch: 2/7 Loss: 3.8375838875770567
Epoch: 2/7 Loss: 3.9019121417999267
Epoch: 2/7 Loss: 3.903789119243622
Epoch: 2/7 Loss: 3.847796236038208
Epoch: 2/7 Loss: 3.8659792437553406
Epoch: 2/7 Loss: 3.8395901260375975
Epoch: 2/7 Loss: 3.8680261654853823
Epoch: 2/7 Loss: 3.8930205755233764
Epoch: 2/7 Loss: 3.841625497817993
Epoch: 2/7 Loss: 3.866253490447998
Epoch: 2/7 Loss: 3.86496945476532
Epoch: 2/7 Loss: 3.878799920082092
Epoch: 2/7 Loss: 3.8804102311134336
Epoch: 2/7 Loss: 3.88853129529953
Epoch: 2/7 Loss: 3.8769692821502684
Epoch: 2/7 Loss: 3.9134617743492126
Epoch: 2/7 Loss: 3.8967726798057556
Epoch: 2/7 Loss: 3.875201427936554
Epoch: 2/7 Loss: 3.914641480445862
Epoch: 3/7 Loss: 3.748516866901536
Epoch: 3/7 Loss: 3.672148921966553
Epoch: 3/7 Loss: 3.649857009410858
Epoch: 3/7 Loss: 3.692138476371765
Epoch: 3/7 Loss: 3.663156278133392
Epoch: 3/7 Loss: 3.679047279834747
Epoch: 3/7 Loss: 3.6694063229560854
Epoch: 3/7 Loss: 3.712371220111847
Epoch: 3/7 Loss: 3.6903881068229674
Epoch: 3/7 Loss: 3.694471092700958
Epoch: 3/7 Loss: 3.704854001045227
Epoch: 3/7 Loss: 3.689842242717743
Epoch: 3/7 Loss: 3.7579155130386352
Epoch: 3/7 Loss: 3.737253586769104
Epoch: 3/7 Loss: 3.72706632900238
Epoch: 3/7 Loss: 3.7433354930877685
Epoch: 3/7 Loss: 3.71257719039917
Epoch: 3/7 Loss: 3.759409801006317
Epoch: 3/7 Loss: 3.729119296550751
Epoch: 3/7 Loss: 3.7226998748779296
Epoch: 3/7 Loss: 3.7610828914642336
Epoch: 3/7 Loss: 3.7970821633338927
Epoch: 3/7 Loss: 3.770445281982422
Epoch: 3/7 Loss: 3.7594774432182314
Epoch: 3/7 Loss: 3.756800744533539
Epoch: 3/7 Loss: 3.7751384620666504
Epoch: 3/7 Loss: 3.7918020520210267
Epoch: 4/7 Loss: 3.6449039628002433
Epoch: 4/7 Loss: 3.5106687302589417
Epoch: 4/7 Loss: 3.5290410652160644
Epoch: 4/7 Loss: 3.5443011288642885
Epoch: 4/7 Loss: 3.544130033016205
Epoch: 4/7 Loss: 3.5624384779930116
Epoch: 4/7 Loss: 3.5637647004127504
Epoch: 4/7 Loss: 3.569939075469971
Epoch: 4/7 Loss: 3.5743988256454466
Epoch: 4/7 Loss: 3.5669960861206054
Epoch: 4/7 Loss: 3.607184049129486
Epoch: 4/7 Loss: 3.6257673621177675
Epoch: 4/7 Loss: 3.604689902305603
Epoch: 4/7 Loss: 3.5941504912376403
Epoch: 4/7 Loss: 3.659016770362854
Epoch: 4/7 Loss: 3.6350736808776856
Epoch: 4/7 Loss: 3.666177755355835
Epoch: 4/7 Loss: 3.629788129329681
Epoch: 4/7 Loss: 3.6585483021736147
Epoch: 4/7 Loss: 3.679232474327087
Epoch: 4/7 Loss: 3.6127443680763243
Epoch: 4/7 Loss: 3.668506244182587
Epoch: 4/7 Loss: 3.6939528884887696
Epoch: 4/7 Loss: 3.647709415435791
Epoch: 4/7 Loss: 3.6739447102546694
Epoch: 4/7 Loss: 3.6843781561851503
Epoch: 4/7 Loss: 3.7024223728179932
Epoch: 5/7 Loss: 3.5740769814326563
Epoch: 5/7 Loss: 3.4174180455207823
Epoch: 5/7 Loss: 3.4536514801979066
Epoch: 5/7 Loss: 3.4686670536994932
Epoch: 5/7 Loss: 3.4594988942146303
Epoch: 5/7 Loss: 3.463566790103912
Epoch: 5/7 Loss: 3.467808780193329
Epoch: 5/7 Loss: 3.476718333721161
Epoch: 5/7 Loss: 3.502764928817749
Epoch: 5/7 Loss: 3.4945299158096312
Epoch: 5/7 Loss: 3.509504997253418
Epoch: 5/7 Loss: 3.5312786660194395
Epoch: 5/7 Loss: 3.510157462120056
Epoch: 5/7 Loss: 3.503564238548279
Epoch: 5/7 Loss: 3.5344916486740114
Epoch: 5/7 Loss: 3.5214456329345705
Epoch: 5/7 Loss: 3.555229040145874
Epoch: 5/7 Loss: 3.5498494777679444
Epoch: 5/7 Loss: 3.5592599000930787
Epoch: 5/7 Loss: 3.555350459575653
Epoch: 5/7 Loss: 3.618158135890961
Epoch: 5/7 Loss: 3.5623470377922057
Epoch: 5/7 Loss: 3.5871895570755004
Epoch: 5/7 Loss: 3.6055741362571716
Epoch: 5/7 Loss: 3.605754216194153
Epoch: 5/7 Loss: 3.5829739995002745
Epoch: 5/7 Loss: 3.5997953104972837
Epoch: 6/7 Loss: 3.4851539391698614
Epoch: 6/7 Loss: 3.3613112268447876
Epoch: 6/7 Loss: 3.368417363166809
Epoch: 6/7 Loss: 3.3763419189453123
Epoch: 6/7 Loss: 3.3974693484306338
Epoch: 6/7 Loss: 3.3874406900405885
Epoch: 6/7 Loss: 3.3713042101860045
Epoch: 6/7 Loss: 3.4227574915885923
Epoch: 6/7 Loss: 3.44297208738327
Epoch: 6/7 Loss: 3.4201958565711976
Epoch: 6/7 Loss: 3.459585627555847
Epoch: 6/7 Loss: 3.4582015042304994
Epoch: 6/7 Loss: 3.421358411312103
Epoch: 6/7 Loss: 3.445003173351288
Epoch: 6/7 Loss: 3.4456550664901733
Epoch: 6/7 Loss: 3.485937143802643
Epoch: 6/7 Loss: 3.4927955327033997
Epoch: 6/7 Loss: 3.4833500928878784
Epoch: 6/7 Loss: 3.5039338874816894
Epoch: 6/7 Loss: 3.4853085083961486
Epoch: 6/7 Loss: 3.4802650952339174
Epoch: 6/7 Loss: 3.5191545062065126
Epoch: 6/7 Loss: 3.52607638835907
Epoch: 6/7 Loss: 3.516192787647247
Epoch: 6/7 Loss: 3.563179894924164
Epoch: 6/7 Loss: 3.505838517189026
Epoch: 6/7 Loss: 3.5872170176506044
Epoch: 7/7 Loss: 3.4303122687695633
Epoch: 7/7 Loss: 3.2928627119064333
Epoch: 7/7 Loss: 3.3060529613494873
Epoch: 7/7 Loss: 3.317435031890869
Epoch: 7/7 Loss: 3.305982421398163
Epoch: 7/7 Loss: 3.327086359500885
Epoch: 7/7 Loss: 3.3422680926322936
Epoch: 7/7 Loss: 3.349820451259613
Epoch: 7/7 Loss: 3.329127327442169
Epoch: 7/7 Loss: 3.3777764086723328
Epoch: 7/7 Loss: 3.4029650230407715
Epoch: 7/7 Loss: 3.3862902793884277
Epoch: 7/7 Loss: 3.38310046005249
Epoch: 7/7 Loss: 3.424765935897827
Epoch: 7/7 Loss: 3.424185378551483
Epoch: 7/7 Loss: 3.4087417187690736
Epoch: 7/7 Loss: 3.4233316488265992
Epoch: 7/7 Loss: 3.414546513557434
Epoch: 7/7 Loss: 3.4155345373153687
Epoch: 7/7 Loss: 3.4409759402275086
Epoch: 7/7 Loss: 3.4624198012351988
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** The vocab_size is the length of vocab_to_int, which we set up at the beginning. After several rounds of trial and error, I settled on a few observations about the hyperparameters:+ The output_size should equal the vocab_size, since the network scores every word in the vocabulary+ A smaller batch size helps avoid memory issues+ We have to balance accuracy against computational time.For further improvement, we could decrease the learning rate once the loss plateaus for a while (see the sketch below). --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
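As an aside before the checkpoint cell below: a minimal sketch of the learning-rate-on-plateau idea mentioned in the answer above, using PyTorch's built-in scheduler (the model, optimizer, and losses here are placeholders, not the network trained in this notebook):
```
import torch
import torch.nn as nn

# sketch only: placeholder model and pretend per-epoch losses
model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.5, patience=2)
for epoch_loss in [4.0, 3.6, 3.5, 3.5, 3.5, 3.5, 3.5]:
    scheduler.step(epoch_loss)                  # halves the lr once the loss stops improving for a while
    print(optimizer.param_groups[0]['lr'])
```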
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
jerry: gasoline...
george: oh, you know what i mean? i don't think so. i don't know. i can't believe i can get the hell out of here.
jerry: what are you gonna do?
elaine: i just can't tell him that.
elaine: you mean, 'what do we do?
jerry: i don't know what to do.
elaine: oh, no no, i can't. i can't do that, but i don't want to be.
jerry: no.
kramer: oh, no.
jerry: you don't even know how to do it for you?
george: i can't believe it.
jerry: i don't know, i know.
jerry: i don't know what it was, but i don't know why she is a friend about your parents.
elaine:(laughs) i mean, i'm not a little nervous. i can't do that, i can't do this.
kramer:(to jerry) you can't do it.
morty: hey.
jerry: hi, i'm sorry. i didn't do anything about this.
elaine:(looking down) what happened to the movies?
jerry: i was in bed for you.
george: you know i didn't even want a little secret.
kramer: i know, but you should take a look. i think i can do that, but, if i have to do a lot better, but i'm not really sure that i have a lot of time, i can't find that, but i can't stand myself.
kramer: well, you know, i think i'm getting rid of it for you.
jerry:(sarcastic) oh, that's the worst.
elaine:(pause) well, i don't know.
jerry: i can't believe that.
george:(pause) i think it's not good.
elaine: oh. oh, yeah!
jerry: what do you want to get?
george: i don't
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
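As a small aside, the same save can be written with a context manager so the file is closed even if the write raises an error; the filename here is just an example:
```
# a with-block closes the file automatically, even on an exception
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```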
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
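Once `create_lookup_tables` below is implemented, a quick, hypothetical sanity check is that encoding and then decoding should round-trip the words:
```
words = ['jerry', 'hello', 'jerry', 'newman']
vocab_to_int, int_to_vocab = create_lookup_tables(words)

encoded = [vocab_to_int[w] for w in words]     # words -> ids
decoded = [int_to_vocab[i] for i in encoded]   # ids -> words
assert decoded == words                        # the two dicts are inverses
```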
###Code
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
# sorting the words from most to least frequent in text occurrence
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# create int_to_vocab dictionaries
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
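A rough, illustrative sketch of how such a dictionary could be applied to raw text (the project's `helper.preprocess_and_save_data` handles this for you; this is not its actual code):
```
def tokenize_text(text, token_dict):
    # Surround each symbol with spaces so punctuation becomes its own "word".
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

print(tokenize_text('bye! bye.', token_lookup()))
# ['bye', '||exclamation_mark||', 'bye', '||period||']
```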
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
".": "||period||",
",": "||comma||",
"\"": "||quotation_mark||",
";": "||semicolon||",
"!": "||exclamation_mark||",
"?": "||question_mark||",
"(": "||left_parantheses",
")": "||right_paratheses||",
"-": "||dash||",
"\n": "||return||"
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
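The worked example above can be reproduced with a simple sliding window; this sketch only illustrates the feature/target pairing that the `batch_data` implementation below builds on:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]
for feature, target in pairs:
    print(feature, '->', target)
# [1, 2, 3, 4] -> 5
# [2, 3, 4, 5] -> 6
# [3, 4, 5, 6] -> 7
```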
###Code
from torch.utils.data import TensorDataset, DataLoader
from torch import Tensor
import numpy as np
def get_sequences(words, sequence_length):
features = []
targets = []
for index, word in enumerate(words):
if (index + sequence_length) < len(words):
sequence = words[index : index + sequence_length]
target = words[index + sequence_length]
features.append(sequence)
targets.append(target)
else:
break
return features, targets
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features, targets = get_sequences(words, sequence_length)
data = TensorDataset(Tensor(features), Tensor(targets))
return torch.utils.data.DataLoader(data, batch_size=batch_size)
def test_get_sequences():
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
expected_features = [
[1, 2, 3, 4],
[2, 3, 4, 5],
[3, 4, 5, 6]
]
expected_targets = [5, 6, 7]
features, targets = get_sequences(words, sequence_length)
assert(expected_targets == targets)
assert(expected_features == features)
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
test_get_sequences()
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
torch.Size([10, 5])
tensor([[ 0., 1., 2., 3., 4.],
[ 1., 2., 3., 4., 5.],
[ 2., 3., 4., 5., 6.],
[ 3., 4., 5., 6., 7.],
[ 4., 5., 6., 7., 8.],
[ 5., 6., 7., 8., 9.],
[ 6., 7., 8., 9., 10.],
[ 7., 8., 9., 10., 11.],
[ 8., 9., 10., 11., 12.],
[ 9., 10., 11., 12., 13.]])
torch.Size([10])
tensor([ 5., 6., 7., 8., 9., 10., 11., 12., 13., 14.])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=dropout, batch_first=True)
# linear layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack the outputs of the lstm
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
out = out.view(batch_size, -1, self.output_size)
# get last batch
out = out[:, -1]
# return output and hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
clip = 5 # gradient clipping
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
hidden = tuple([each.data for each in hidden])
# zero accumulated gradients
rnn.zero_grad()
# get the output from the model
output, hidden = rnn(inp, hidden)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 200
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 4
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 800
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 4 epoch(s)...
Epoch: 1/4 Loss: 5.181237701892853
Epoch: 1/4 Loss: 4.536556556224823
Epoch: 1/4 Loss: 4.383112089633942
Epoch: 1/4 Loss: 4.399640179634094
Epoch: 1/4 Loss: 4.28509246635437
Epoch: 1/4 Loss: 4.174441674232483
Epoch: 1/4 Loss: 4.196460688591004
Epoch: 1/4 Loss: 4.273359351158142
Epoch: 2/4 Loss: 4.083352662374576
Epoch: 2/4 Loss: 3.812438132762909
Epoch: 2/4 Loss: 3.768332154750824
Epoch: 2/4 Loss: 3.84101727104187
Epoch: 2/4 Loss: 3.773102207183838
Epoch: 2/4 Loss: 3.692211805343628
Epoch: 2/4 Loss: 3.740818433761597
Epoch: 2/4 Loss: 3.811843285083771
Epoch: 3/4 Loss: 3.729530919343233
Epoch: 3/4 Loss: 3.5639186272621153
Epoch: 3/4 Loss: 3.5064866938591
Epoch: 3/4 Loss: 3.6016391167640687
Epoch: 3/4 Loss: 3.5456644682884217
Epoch: 3/4 Loss: 3.4586289110183714
Epoch: 3/4 Loss: 3.509878888130188
Epoch: 3/4 Loss: 3.5734462289810183
Epoch: 4/4 Loss: 3.508139732480049
Epoch: 4/4 Loss: 3.3647750992774963
Epoch: 4/4 Loss: 3.327261540412903
Epoch: 4/4 Loss: 3.4105386810302734
Epoch: 4/4 Loss: 3.3607396955490114
Epoch: 4/4 Loss: 3.3061361818313597
Epoch: 4/4 Loss: 3.348609555721283
Epoch: 4/4 Loss: 3.383381802558899
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** First, I used some initial params based on the sentiment analysis project and some research. I started training the model and had a problem with the loss oscillating and not decreasing well enough.I tried different values for the learning rate, eventually moving from 0.1 to 0.001. The loss seemed to decrease better, but still not enough. I tried different values for the other params, but that didn't help.After more research, I removed the dropout from the model and changed vocab_size from `len(vocab_to_int) + 10` to `len(vocab_to_int)`. I thought I should also include the punctuation tokens, but they are already included. This improved the training a lot.Next, I tried different values for embedding_dim, hidden_dim, and seq_length, but didn't see much difference. The change that made a big impact was increasing batch_size from 20 to 80. I tried smaller sizes for embedding_dim and hidden_dim, but they didn't perform well, so I ended up with the current ones. I increased batch_size even further to 200 and the model was finally training well. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:41: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____
###Markdown
Review comments1. You could look to apply what you've learned to the following problems: - Generate your own Bach music with something like [DeepBach](https://arxiv.org/pdf/1612.01010.pdf). - Predict seizures in intracranial EEG recordings on [Kaggle](https://www.kaggle.com/c/seizure-prediction).2. token_lookup: Here's a good resource discussing more preprocessing steps that you can try: - [Preprocessing text before using an RNN](https://datascience.stackexchange.com/questions/11402/preprocessing-text-before-use-rnn)3. batch_data: Overall, good work implementing the batch_data function. You may also choose to create a [generator](https://wiki.python.org/moin/Generators) that batches data similarly but returns x and y batches using yield. A generator lets you create a function that behaves like an iterator in a fast, clean way, without storing its contents in memory.4. RNN network: - The output size is correct. It should be set to the vocab size because we want the model to produce a fully generated script equal in size to the script fed to the model. You could vary the output size and re-train the model to see how this impacts the model's performance. - Also, an early stopping callback could be used to prevent the model from overfitting. See the article below to learn more about an early stopping approach: [Early Stopping with PyTorch to Restrain your Model from Overfitting](https://medium.com/analytics-vidhya/early-stopping-with-pytorch-to-restrain-your-model-from-overfitting-dce6de4081c5) My questions1. The label is a single word id while the output has vocab_size scores; why does the loss still work?2. Why do we need to reshape the LSTM output?3. Why do we take only the last output, `output[:, -1]`? (See the shape check sketched below.) TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
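Regarding the questions above, here is a small, self-contained shape check (not part of the original notebook; all sizes are made up) showing what the reshape plus `output[:, -1]` step does and why `nn.CrossEntropyLoss` accepts `(batch, vocab_size)` scores together with a single integer label per example:
```
import torch
import torch.nn as nn

batch_size, seq_length, hidden_dim, vocab = 4, 5, 8, 10

lstm_out = torch.randn(batch_size, seq_length, hidden_dim)   # LSTM output for every time step
fc = nn.Linear(hidden_dim, vocab)

# Flatten (batch, seq, hidden) -> (batch*seq, hidden), apply the fc layer, reshape back.
flat = lstm_out.contiguous().view(-1, hidden_dim)
scores = fc(flat).view(batch_size, seq_length, vocab)

# We only want the word that follows the *whole* sequence, so keep the last time step.
last_scores = scores[:, -1]                                  # shape: (batch, vocab)

# CrossEntropyLoss takes raw class scores (batch, vocab) and integer class ids (batch);
# it applies log-softmax internally, so a single word id per target is exactly what it expects.
targets = torch.randint(0, vocab, (batch_size,))
loss = nn.CrossEntropyLoss()(last_scores, targets)
print(last_scores.shape, targets.shape, loss.item())
```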
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
text_test = 'itical science? i met her the night i did the show in lansing... \n\ngeorge: ha.'
test_words = text_test.split()
t = tuple(set(test_words))
for i,k in enumerate(t):
print(i,k)
###Output
0 in
1 lansing...
2 night
3 did
4 science?
5 met
6 george:
7 i
8 her
9 show
10 ha.
11 the
12 itical
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = tuple(set(text)) # set is {}, tuple is (), enumerate() gives each word an index
int_to_vocab = dict(enumerate(words))
vocab_to_int = {word: i for i, word in int_to_vocab.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
d = {
".": "||Period||",
",": "||Comma||",
"\"": "||Quotation_Mark||",
";": "||Semicolon||",
"!": "||Exclamation_mark||",
"?": "||Question_mark||",
"(": "||Left_Parentheses||",
")": "||Right_Parentheses||",
"-": "||Dash||",
"\n": "||Return||",
}
return d
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
device = torch.device("cuda:0" if train_on_gpu else "cpu")
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.htmltorch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.```data = TensorDataset(feature_tensors, target_tensors)data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)``` BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:```words = [1, 2, 3, 4, 5, 6, 7]sequence_length = 4```Your first `feature_tensor` should contain the values:```[1, 2, 3, 4]```And the corresponding `target_tensor` should just be the next "word"/tokenized word value:```5```This should continue with the second `feature_tensor`, `target_tensor` being:```[2, 3, 4, 5] features6 target```
###Code
# from torch.utils.data import TensorDataset, DataLoader
# def batch_data(words, sequence_length, batch_size):
# """
# Batch the neural network data using DataLoader
# :param words: The word ids of the TV scripts
# :param sequence_length: The sequence length of each batch
# :param batch_size: The size of each batch; the number of sequences in a batch
# :return: DataLoader with batched data
# """
# # TODO: Implement function
# words = np.asarray(words)
# batch_len = sequence_length*batch_size
# n_batches = int(len(words) / batch_len)
# words = words [: n_batches*batch_len]
# words = words.reshape((batch_size, -1))
# print('reshape words:', words)
# features = None
# targets = None
# for n in range(0, words.shape[1], sequence_length):
# # The features
# x = words[:, n:n+sequence_length]
# y = np.zeros([batch_size, 1])
# print('words:', words.shape)
# y[:, -1] = words[:, n+sequence_length]
# y = y.reshape(batch_size)
# if features is None:
# features = x
# targets = y
# else:
# features = np.append(features, x, axis=0)
# targets = np.append(targets, y, axis=0)
# # The targets, shifted by one
# features_tensor = torch.from_numpy(features).to(device)
# targets_tensor = torch.from_numpy(targets).to(device)
# data = TensorDataset(features_tensor, targets_tensor)
# data_loader = torch.utils.data.DataLoader(data,
# batch_size=batch_size)
# return data_loader
# # there is no test for this function, but you are encouraged to create
# # print statements and tests of your own
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
words_len = len(words)
words = np.asarray(words)
batch_len = sequence_length*batch_size
features = []
targets = []
for s in range(0, len(words), batch_len):
# The features
for i in range(batch_size):
if s + i +sequence_length >= words_len:
break
features.append(words[s + i :s + i +sequence_length])
targets.append(words[s + i +sequence_length])
# # The targets, shifted by one
features = np.array(features)
targets = np.array(targets)
features_tensor = torch.from_numpy(features).to(device)
targets_tensor = torch.from_numpy(targets).to(device)
data = TensorDataset(features_tensor, targets_tensor)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):```torch.Size([10, 5])tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]])torch.Size([10])tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])``` SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
len(test_text)
###Output
_____no_output_____
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.htmltorch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:``` reshape into (batch_size, seq_length, output_size)output = output.view(batch_size, -1, self.output_size) get last batchout = output[:, -1]```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.vocab_size = vocab_size
self.output_size = output_size
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.drop_prob = dropout
self.embedding = nn.Embedding(vocab_size, embedding_dim)
# define model layers
self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
dropout=self.drop_prob, batch_first=True)
## TODO: define a dropout layer
self.dropout = nn.Dropout(self.drop_prob)
## TODO: define the final, fully-connected output layer
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# return one batch of output word scores and the hidden state
# print('nn_input', nn_input.shape) #[128, 100]
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# print('reshaped lstm_out', lstm_out.shape) # [12800, 512]
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# print('fc_out', out.shape) #[12800, 21388]
# return the final output and the hidden state
output = out.view(batch_size, -1, self.output_size)
# print('reshape fc_out', output.shape) #[128, 100, 21388]
# get last batch
output = output[:, -1]
# print('last batch fc_out', output.shape) #[128, 21388]
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
# print('batch_size',batch_size)
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# TODO: Implement Function
rnn.zero_grad()
hidden = tuple([each.data for each in hidden])
output, h = rnn(inp, hidden)
loss = criterion(output, target)
# print('criterion: output={} , target={}'.format(output.shape, target.shape))
# criterion: output=torch.Size([128, 21388]) , target=torch.Size([128])
# move data to GPU, if available
# perform backpropagation and optimization
loss.backward()
# nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
loss_np = loss.data.cpu().numpy()
return loss_np.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model's progress will be shown every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 100 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 20
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int) #21388
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 400
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batch_size = 64
train_loader = batch_data(int_text, sequence_length=200, batch_size=batch_size)
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim=600, n_layers=2, dropout=0.5)
print(rnn)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, n_epochs=20, show_every_n_batches=3)
# saving the trained model
helper.save_model('./save/trained_rnn_200_v2', trained_rnn)
print('Model Trained and Saved')
###Output
RNN(
(embedding): Embedding(21388, 400)
(lstm): LSTM(400, 600, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.5)
(fc): Linear(in_features=600, out_features=21388, bias=True)
)
Training for 20 epoch(s)...
Epoch: 1/20 Loss: 9.958581924438477
Epoch: 1/20 Loss: 9.706121762593588
Epoch: 1/20 Loss: 7.863609949747722
Epoch: 1/20 Loss: 7.249745051066081
Epoch: 1/20 Loss: 6.732536633809407
Epoch: 1/20 Loss: 6.5185502370198565
Epoch: 1/20 Loss: 7.0281416575113935
Epoch: 1/20 Loss: 6.508562405904134
Epoch: 1/20 Loss: 6.4359086354573565
Epoch: 1/20 Loss: 6.524682680765788
Epoch: 1/20 Loss: 6.63815450668335
Epoch: 1/20 Loss: 5.932420253753662
Epoch: 1/20 Loss: 6.360460599263509
Epoch: 1/20 Loss: 5.806179046630859
Epoch: 1/20 Loss: 6.499349753061931
Epoch: 1/20 Loss: 6.086406389872233
Epoch: 1/20 Loss: 5.316435019175212
Epoch: 1/20 Loss: 5.489927927652995
Epoch: 1/20 Loss: 6.000441869099935
Epoch: 1/20 Loss: 6.3408559163411455
Epoch: 1/20 Loss: 5.85344139734904
Epoch: 1/20 Loss: 5.796311060587565
Epoch: 1/20 Loss: 5.843286991119385
Epoch: 2/20 Loss: 5.800034046173096
Epoch: 2/20 Loss: 5.130236784617106
Epoch: 2/20 Loss: 4.5888746579488116
Epoch: 2/20 Loss: 4.815451939900716
Epoch: 2/20 Loss: 4.586483796437581
Epoch: 2/20 Loss: 4.774575710296631
Epoch: 2/20 Loss: 5.710052649180095
Epoch: 2/20 Loss: 5.263357480367024
Epoch: 2/20 Loss: 5.055178801218669
Epoch: 2/20 Loss: 5.247008482615153
Epoch: 2/20 Loss: 5.555044809977214
Epoch: 2/20 Loss: 5.052340189615886
Epoch: 2/20 Loss: 5.3099416097005205
Epoch: 2/20 Loss: 4.913182894388835
Epoch: 2/20 Loss: 5.5798079172770185
Epoch: 2/20 Loss: 5.5165127118428545
Epoch: 2/20 Loss: 4.669315497080485
Epoch: 2/20 Loss: 4.912684917449951
Epoch: 2/20 Loss: 5.355833530426025
Epoch: 2/20 Loss: 5.667577425638835
Epoch: 2/20 Loss: 5.327864011128743
Epoch: 2/20 Loss: 5.279582341512044
Epoch: 2/20 Loss: 5.300801753997803
Epoch: 3/20 Loss: 5.361390471458435
Epoch: 3/20 Loss: 4.759755929311116
Epoch: 3/20 Loss: 4.274578173955281
Epoch: 3/20 Loss: 4.559749762217204
Epoch: 3/20 Loss: 4.357134660085042
Epoch: 3/20 Loss: 4.5699920654296875
Epoch: 3/20 Loss: 5.476928075154622
Epoch: 3/20 Loss: 4.841865221659343
Epoch: 3/20 Loss: 4.733288923899333
Epoch: 3/20 Loss: 4.87429141998291
Epoch: 3/20 Loss: 5.239358425140381
Epoch: 3/20 Loss: 4.77719783782959
Epoch: 3/20 Loss: 5.019404252370198
Epoch: 3/20 Loss: 4.651350021362305
Epoch: 3/20 Loss: 5.183464050292969
Epoch: 3/20 Loss: 5.1543205579121905
Epoch: 3/20 Loss: 4.385968128840129
Epoch: 3/20 Loss: 4.674489180246989
Epoch: 3/20 Loss: 5.027946472167969
Epoch: 3/20 Loss: 5.337019443511963
Epoch: 3/20 Loss: 5.000161806742351
Epoch: 3/20 Loss: 5.033861955006917
Epoch: 3/20 Loss: 5.0126363436381025
Epoch: 4/20 Loss: 5.161085247993469
Epoch: 4/20 Loss: 4.575662771860759
Epoch: 4/20 Loss: 4.1439298788706465
Epoch: 4/20 Loss: 4.4042402903238935
Epoch: 4/20 Loss: 4.218659162521362
Epoch: 4/20 Loss: 4.417603174845378
Epoch: 4/20 Loss: 5.234302997589111
Epoch: 4/20 Loss: 4.724959214528401
Epoch: 4/20 Loss: 4.6078596115112305
Epoch: 4/20 Loss: 4.693782965342204
Epoch: 4/20 Loss: 5.064415295918782
Epoch: 4/20 Loss: 4.544859886169434
Epoch: 4/20 Loss: 4.830895900726318
Epoch: 4/20 Loss: 4.4529978434244795
Epoch: 4/20 Loss: 4.929648399353027
Epoch: 4/20 Loss: 4.929540475209554
Epoch: 4/20 Loss: 4.192008415857951
Epoch: 4/20 Loss: 4.445526917775472
Epoch: 4/20 Loss: 4.824466705322266
Epoch: 4/20 Loss: 5.1562652587890625
Epoch: 4/20 Loss: 4.769290765126546
Epoch: 4/20 Loss: 4.723865032196045
Epoch: 4/20 Loss: 4.793251673380534
Epoch: 5/20 Loss: 4.91349196434021
Epoch: 5/20 Loss: 4.423328081766765
Epoch: 5/20 Loss: 3.94789989789327
Epoch: 5/20 Loss: 4.252796411514282
Epoch: 5/20 Loss: 4.053062756856282
Epoch: 5/20 Loss: 4.213995615641276
Epoch: 5/20 Loss: 5.100189685821533
Epoch: 5/20 Loss: 4.5287894407908125
Epoch: 5/20 Loss: 4.385985533396403
Epoch: 5/20 Loss: 4.520823955535889
Epoch: 5/20 Loss: 4.864138126373291
Epoch: 5/20 Loss: 4.378387928009033
Epoch: 5/20 Loss: 4.63739013671875
Epoch: 5/20 Loss: 4.245985984802246
Epoch: 5/20 Loss: 4.765323638916016
Epoch: 5/20 Loss: 4.700669765472412
Epoch: 5/20 Loss: 4.048598210016887
Epoch: 5/20 Loss: 4.2476270993550616
Epoch: 5/20 Loss: 4.591201305389404
Epoch: 5/20 Loss: 4.898910840352376
Epoch: 5/20 Loss: 4.647054036458333
Epoch: 5/20 Loss: 4.568993250528972
Epoch: 5/20 Loss: 4.55507230758667
Epoch: 6/20 Loss: 4.710606396198273
Epoch: 6/20 Loss: 4.303759495417277
Epoch: 6/20 Loss: 3.827535311381022
Epoch: 6/20 Loss: 4.1741368770599365
Epoch: 6/20 Loss: 3.95769993464152
Epoch: 6/20 Loss: 4.106383164723714
Epoch: 6/20 Loss: 4.940904299418132
Epoch: 6/20 Loss: 4.300746361414592
Epoch: 6/20 Loss: 4.2599945068359375
Epoch: 6/20 Loss: 4.38616148630778
Epoch: 6/20 Loss: 4.701078097025554
Epoch: 6/20 Loss: 4.229288260142009
Epoch: 6/20 Loss: 4.4513265291849775
Epoch: 6/20 Loss: 4.07342791557312
Epoch: 6/20 Loss: 4.5369517008463545
Epoch: 6/20 Loss: 4.554623126983643
Epoch: 6/20 Loss: 3.866297403971354
Epoch: 6/20 Loss: 4.132340272267659
Epoch: 6/20 Loss: 4.385867118835449
Epoch: 6/20 Loss: 4.7619757652282715
Epoch: 6/20 Loss: 4.446090539296468
Epoch: 6/20 Loss: 4.42699138323466
Epoch: 6/20 Loss: 4.392768939336141
Epoch: 7/20 Loss: 4.605188190937042
Epoch: 7/20 Loss: 4.156962315241496
Epoch: 7/20 Loss: 3.738390843073527
Epoch: 7/20 Loss: 3.968114217122396
Epoch: 7/20 Loss: 3.7904086112976074
Epoch: 7/20 Loss: 3.876730442047119
Epoch: 7/20 Loss: 4.69500732421875
Epoch: 7/20 Loss: 4.190468152364095
Epoch: 7/20 Loss: 4.096696694691976
Epoch: 7/20 Loss: 4.1273103555043535
Epoch: 7/20 Loss: 4.410106976826985
Epoch: 7/20 Loss: 4.045944134394328
Epoch: 7/20 Loss: 4.317123730977376
Epoch: 7/20 Loss: 3.9064788023630777
Epoch: 7/20 Loss: 4.322257200876872
Epoch: 7/20 Loss: 4.3300395011901855
Epoch: 7/20 Loss: 3.7030864556630454
Epoch: 7/20 Loss: 3.8839136759440103
Epoch: 7/20 Loss: 4.182711680730184
Epoch: 7/20 Loss: 4.544197718302409
Epoch: 7/20 Loss: 4.23785416285197
Epoch: 7/20 Loss: 4.195225079854329
Epoch: 7/20 Loss: 4.189998944600423
Epoch: 8/20 Loss: 4.385303497314453
Epoch: 8/20 Loss: 3.9964472452799478
Epoch: 8/20 Loss: 3.6130781968434653
Epoch: 8/20 Loss: 3.8470616340637207
Epoch: 8/20 Loss: 3.7110977172851562
Epoch: 8/20 Loss: 3.727081616719564
Epoch: 8/20 Loss: 4.481795152028401
Epoch: 8/20 Loss: 3.994717597961426
Epoch: 8/20 Loss: 3.91977866490682
Epoch: 8/20 Loss: 3.949684460957845
Epoch: 8/20 Loss: 4.270252386728923
Epoch: 8/20 Loss: 3.8725639979044595
Epoch: 8/20 Loss: 4.074734767278035
Epoch: 8/20 Loss: 3.719568649927775
Epoch: 8/20 Loss: 4.15228796005249
Epoch: 8/20 Loss: 4.120411078135173
Epoch: 8/20 Loss: 3.576140801111857
Epoch: 8/20 Loss: 3.7569658756256104
Epoch: 8/20 Loss: 3.9483041763305664
Epoch: 8/20 Loss: 4.288441737492879
Epoch: 8/20 Loss: 4.012235879898071
Epoch: 8/20 Loss: 4.050031026204427
Epoch: 8/20 Loss: 4.0011264483133955
Epoch: 9/20 Loss: 4.230976402759552
Epoch: 9/20 Loss: 3.8657448291778564
Epoch: 9/20 Loss: 3.4671210447947183
Epoch: 9/20 Loss: 3.688359260559082
Epoch: 9/20 Loss: 3.5587666034698486
Epoch: 9/20 Loss: 3.607822815577189
Epoch: 9/20 Loss: 4.374560674031575
Epoch: 9/20 Loss: 3.7822327613830566
Epoch: 9/20 Loss: 3.7203831672668457
Epoch: 9/20 Loss: 3.7646692593892417
Epoch: 9/20 Loss: 4.088390827178955
Epoch: 9/20 Loss: 3.7527387936909995
Epoch: 9/20 Loss: 3.98057492574056
Epoch: 9/20 Loss: 3.5951809088389077
Epoch: 9/20 Loss: 4.009266535441081
Epoch: 9/20 Loss: 4.040239651997884
Epoch: 9/20 Loss: 3.4113131364186606
Epoch: 9/20 Loss: 3.5709175268809
Epoch: 9/20 Loss: 3.7808525562286377
Epoch: 9/20 Loss: 4.110503355662028
Epoch: 9/20 Loss: 3.8000247478485107
Epoch: 9/20 Loss: 3.790783405303955
Epoch: 9/20 Loss: 3.8263745307922363
Epoch: 10/20 Loss: 3.940158486366272
Epoch: 10/20 Loss: 3.610634962717692
Epoch: 10/20 Loss: 3.38628888130188
Epoch: 10/20 Loss: 3.619947592417399
Epoch: 10/20 Loss: 3.440582513809204
Epoch: 10/20 Loss: 3.521163543065389
Epoch: 10/20 Loss: 4.139325539271037
Epoch: 10/20 Loss: 3.5962241490681968
Epoch: 10/20 Loss: 3.5890185038248696
Epoch: 10/20 Loss: 3.6476603349049888
Epoch: 10/20 Loss: 3.902036984761556
Epoch: 10/20 Loss: 3.5244584878285727
Epoch: 10/20 Loss: 3.808178663253784
Epoch: 10/20 Loss: 3.4422265688578286
Epoch: 10/20 Loss: 3.7875329653422036
Epoch: 10/20 Loss: 3.923575242360433
Epoch: 10/20 Loss: 3.2468807697296143
Epoch: 10/20 Loss: 3.3837295373280845
Epoch: 10/20 Loss: 3.64907177289327
Epoch: 10/20 Loss: 3.9422510464986167
Epoch: 10/20 Loss: 3.5793240070343018
Epoch: 10/20 Loss: 3.6010279655456543
Epoch: 10/20 Loss: 3.558387517929077
Epoch: 11/20 Loss: 3.765770196914673
Epoch: 11/20 Loss: 3.4691654046376548
Epoch: 11/20 Loss: 3.187297979990641
Epoch: 11/20 Loss: 3.494823376337687
Epoch: 11/20 Loss: 3.2687037785847983
Epoch: 11/20 Loss: 3.3666280110677085
Epoch: 11/20 Loss: 3.9034177462259927
Epoch: 11/20 Loss: 3.4081226189931235
Epoch: 11/20 Loss: 3.4614394505818686
Epoch: 11/20 Loss: 3.4412385622660318
Epoch: 11/20 Loss: 3.608020782470703
Epoch: 11/20 Loss: 3.3132993380228677
Epoch: 11/20 Loss: 3.5873115062713623
Epoch: 11/20 Loss: 3.14542547861735
Epoch: 11/20 Loss: 3.6499172846476235
Epoch: 11/20 Loss: 3.5665555000305176
Epoch: 11/20 Loss: 2.9303673108418784
Epoch: 11/20 Loss: 3.2625865936279297
Epoch: 11/20 Loss: 3.4378462632497153
Epoch: 11/20 Loss: 3.888513962427775
Epoch: 11/20 Loss: 3.49186635017395
Epoch: 11/20 Loss: 3.4093669255574546
Epoch: 11/20 Loss: 3.46438201268514
Epoch: 12/20 Loss: 3.574786603450775
Epoch: 12/20 Loss: 3.183721939722697
Epoch: 12/20 Loss: 3.001272122065226
Epoch: 12/20 Loss: 3.239773432413737
Epoch: 12/20 Loss: 3.039099136988322
Epoch: 12/20 Loss: 3.1630152066548667
Epoch: 12/20 Loss: 3.649697224299113
Epoch: 12/20 Loss: 3.1984384854634604
Epoch: 12/20 Loss: 3.117666800816854
Epoch: 12/20 Loss: 3.243520180384318
Epoch: 12/20 Loss: 3.3705769379933677
Epoch: 12/20 Loss: 3.193171183268229
Epoch: 12/20 Loss: 3.2801406383514404
Epoch: 12/20 Loss: 2.986464023590088
Epoch: 12/20 Loss: 3.426698923110962
Epoch: 12/20 Loss: 3.3971763451894126
Epoch: 12/20 Loss: 2.874650319417318
Epoch: 12/20 Loss: 3.000187555948893
Epoch: 12/20 Loss: 3.1743199030558267
Epoch: 12/20 Loss: 3.5641395250956216
Epoch: 12/20 Loss: 3.194021304448446
Epoch: 12/20 Loss: 3.1579459508260093
Epoch: 12/20 Loss: 3.2736295064290366
Epoch: 13/20 Loss: 3.3554237484931946
Epoch: 13/20 Loss: 3.091209967931112
Epoch: 13/20 Loss: 2.868119239807129
Epoch: 13/20 Loss: 3.1162149906158447
Epoch: 13/20 Loss: 2.9672609170277915
Epoch: 13/20 Loss: 3.0157578786214194
Epoch: 13/20 Loss: 3.359011729558309
Epoch: 13/20 Loss: 2.9590441385904946
Epoch: 13/20 Loss: 2.8988802433013916
Epoch: 13/20 Loss: 3.0317410628000894
Epoch: 13/20 Loss: 3.043818473815918
Epoch: 13/20 Loss: 2.9422900676727295
Epoch: 13/20 Loss: 3.083207289377848
Epoch: 13/20 Loss: 2.8259807427724204
Epoch: 13/20 Loss: 3.1831044356028237
Epoch: 13/20 Loss: 3.1320862770080566
Epoch: 13/20 Loss: 2.64082924524943
Epoch: 13/20 Loss: 2.807501236597697
Epoch: 13/20 Loss: 2.965097665786743
Epoch: 13/20 Loss: 3.260507345199585
Epoch: 13/20 Loss: 3.0080211957295737
Epoch: 13/20 Loss: 2.9455227057139077
Epoch: 13/20 Loss: 3.001010020573934
Epoch: 14/20 Loss: 3.1319024562835693
Epoch: 14/20 Loss: 2.864835182825724
Epoch: 14/20 Loss: 2.643784205118815
Epoch: 14/20 Loss: 2.928978761037191
Epoch: 14/20 Loss: 2.742231845855713
Epoch: 14/20 Loss: 2.7589685916900635
Epoch: 14/20 Loss: 3.184190273284912
Epoch: 14/20 Loss: 2.777350425720215
Epoch: 14/20 Loss: 2.6433753172556558
Epoch: 14/20 Loss: 2.9486355781555176
Epoch: 14/20 Loss: 2.759418169657389
Epoch: 14/20 Loss: 2.7255218823750815
Epoch: 14/20 Loss: 2.857006867726644
Epoch: 14/20 Loss: 2.6643308798472085
Epoch: 14/20 Loss: 3.009838422139486
Epoch: 14/20 Loss: 2.932336409886678
Epoch: 14/20 Loss: 2.4252865314483643
Epoch: 14/20 Loss: 2.673818588256836
Epoch: 14/20 Loss: 2.758777062098185
Epoch: 14/20 Loss: 3.041149457295736
Epoch: 14/20 Loss: 2.754335641860962
Epoch: 14/20 Loss: 2.693573792775472
Epoch: 14/20 Loss: 2.811082442601522
Epoch: 15/20 Loss: 2.9645098447799683
Epoch: 15/20 Loss: 2.6276638507843018
Epoch: 15/20 Loss: 2.5081238746643066
Epoch: 15/20 Loss: 2.7691024939219155
Epoch: 15/20 Loss: 2.6209895610809326
Epoch: 15/20 Loss: 2.606226921081543
Epoch: 15/20 Loss: 3.01155956586202
Epoch: 15/20 Loss: 2.521289825439453
Epoch: 15/20 Loss: 2.4819912115732827
Epoch: 15/20 Loss: 2.652308781941732
Epoch: 15/20 Loss: 2.6202286084493003
Epoch: 15/20 Loss: 2.487241586049398
Epoch: 15/20 Loss: 2.669004281361898
Epoch: 15/20 Loss: 2.4880974292755127
Epoch: 15/20 Loss: 2.8563265800476074
Epoch: 15/20 Loss: 2.7375148932139077
Epoch: 15/20 Loss: 2.3363521099090576
Epoch: 15/20 Loss: 2.441721280415853
Epoch: 15/20 Loss: 2.5556556383768716
Epoch: 15/20 Loss: 2.807891527811686
Epoch: 15/20 Loss: 2.579338312149048
Epoch: 15/20 Loss: 2.5047314961751304
Epoch: 15/20 Loss: 2.5898847579956055
Epoch: 16/20 Loss: 2.7055962085723877
Epoch: 16/20 Loss: 2.535599946975708
Epoch: 16/20 Loss: 2.349489212036133
Epoch: 16/20 Loss: 2.5405503114064536
Epoch: 16/20 Loss: 2.348211685816447
Epoch: 16/20 Loss: 2.4864562352498374
Epoch: 16/20 Loss: 2.7471976280212402
Epoch: 16/20 Loss: 2.326981862386068
Epoch: 16/20 Loss: 2.335393031438192
Epoch: 16/20 Loss: 2.5306597550710044
Epoch: 16/20 Loss: 2.382378101348877
Epoch: 16/20 Loss: 2.256469964981079
Epoch: 16/20 Loss: 2.6152642567952475
Epoch: 16/20 Loss: 2.2749523321787515
Epoch: 16/20 Loss: 2.5816116333007812
Epoch: 16/20 Loss: 2.523202101389567
Epoch: 16/20 Loss: 2.0898168881734214
Epoch: 16/20 Loss: 2.1976340611775718
Epoch: 16/20 Loss: 2.3798329830169678
Epoch: 16/20 Loss: 2.4624365170796714
Epoch: 16/20 Loss: 2.2570886611938477
Epoch: 16/20 Loss: 2.2233104705810547
Epoch: 16/20 Loss: 2.3574225902557373
Epoch: 17/20 Loss: 2.3814817070961
Epoch: 17/20 Loss: 2.2473496198654175
Epoch: 17/20 Loss: 2.18154247601827
Epoch: 17/20 Loss: 2.3392560482025146
Epoch: 17/20 Loss: 2.252898613611857
Epoch: 17/20 Loss: 2.2631988525390625
Epoch: 17/20 Loss: 2.4476199944814048
Epoch: 17/20 Loss: 2.033381382624308
Epoch: 17/20 Loss: 2.0798938274383545
Epoch: 17/20 Loss: 2.286619265874227
Epoch: 17/20 Loss: 2.1523786385854087
Epoch: 17/20 Loss: 2.2670957247416177
Epoch: 17/20 Loss: 2.2802541255950928
Epoch: 17/20 Loss: 2.143662691116333
Epoch: 17/20 Loss: 2.4154065450032554
Epoch: 17/20 Loss: 2.360225200653076
Epoch: 17/20 Loss: 1.9934606552124023
Epoch: 17/20 Loss: 2.0037982066472373
Epoch: 17/20 Loss: 2.2459132273991904
Epoch: 17/20 Loss: 2.3470895290374756
Epoch: 17/20 Loss: 2.1696882247924805
Epoch: 17/20 Loss: 2.0657910903294883
Epoch: 17/20 Loss: 2.2231138547261557
Epoch: 18/20 Loss: 2.2304621636867523
Epoch: 18/20 Loss: 2.156540552775065
Epoch: 18/20 Loss: 2.02048397064209
Epoch: 18/20 Loss: 2.286946932474772
Epoch: 18/20 Loss: 2.0845136642456055
Epoch: 18/20 Loss: 2.0657012462615967
Epoch: 18/20 Loss: 2.1726224422454834
Epoch: 18/20 Loss: 1.921269138654073
Epoch: 18/20 Loss: 1.8292070627212524
Epoch: 18/20 Loss: 2.0982075532277427
Epoch: 18/20 Loss: 1.8747754494349163
Epoch: 18/20 Loss: 2.060596466064453
Epoch: 18/20 Loss: 2.1123546759287515
Epoch: 18/20 Loss: 1.9146449168523152
Epoch: 18/20 Loss: 2.072672128677368
Epoch: 18/20 Loss: 2.196497678756714
Epoch: 18/20 Loss: 1.7107895612716675
Epoch: 18/20 Loss: 1.9680112997690837
Epoch: 18/20 Loss: 1.9936397473017375
Epoch: 18/20 Loss: 1.9863993724187214
Epoch: 18/20 Loss: 1.9955387115478516
Epoch: 18/20 Loss: 1.8729010820388794
Epoch: 18/20 Loss: 1.95305597782135
Epoch: 19/20 Loss: 1.907970130443573
Epoch: 19/20 Loss: 1.909160852432251
Epoch: 19/20 Loss: 1.9067612489064534
Epoch: 19/20 Loss: 1.9787201881408691
Epoch: 19/20 Loss: 1.8585262298583984
Epoch: 19/20 Loss: 1.9460563659667969
Epoch: 19/20 Loss: 1.9155720472335815
Epoch: 19/20 Loss: 1.7678314447402954
Epoch: 19/20 Loss: 1.60900882879893
Epoch: 19/20 Loss: 1.8782674074172974
Epoch: 19/20 Loss: 1.749641219774882
Epoch: 19/20 Loss: 1.8779502312342327
Epoch: 19/20 Loss: 1.895182689030965
Epoch: 19/20 Loss: 1.8008026281992595
Epoch: 19/20 Loss: 1.8963688611984253
Epoch: 19/20 Loss: 1.885638952255249
Epoch: 19/20 Loss: 1.6899528503417969
Epoch: 19/20 Loss: 1.710562825202942
Epoch: 19/20 Loss: 1.8041921854019165
Epoch: 19/20 Loss: 1.7710171937942505
Epoch: 19/20 Loss: 1.848591923713684
Epoch: 19/20 Loss: 1.7242599328358967
Epoch: 19/20 Loss: 1.850838343302409
Epoch: 20/20 Loss: 1.7542192041873932
Epoch: 20/20 Loss: 1.746577501296997
Epoch: 20/20 Loss: 1.7158775726954143
Epoch: 20/20 Loss: 1.8904620011647542
Epoch: 20/20 Loss: 1.780837615331014
Epoch: 20/20 Loss: 1.7228582700093586
Epoch: 20/20 Loss: 1.8387525081634521
Epoch: 20/20 Loss: 1.5870659748713176
Epoch: 20/20 Loss: 1.5571142037709553
Epoch: 20/20 Loss: 1.7474223375320435
Epoch: 20/20 Loss: 1.5173194805781047
Epoch: 20/20 Loss: 1.6295450528462727
Epoch: 20/20 Loss: 1.6993798812230427
Epoch: 20/20 Loss: 1.6462156772613525
Epoch: 20/20 Loss: 1.755162000656128
Epoch: 20/20 Loss: 1.6685333251953125
Epoch: 20/20 Loss: 1.4382768869400024
Epoch: 20/20 Loss: 1.5612629652023315
Epoch: 20/20 Loss: 1.630523959795634
Epoch: 20/20 Loss: 1.6356765429178874
Epoch: 20/20 Loss: 1.6279030243555705
Epoch: 20/20 Loss: 1.5642908811569214
Epoch: 20/20 Loss: 1.6505850553512573
Model Trained and Saved
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** 1. Tried sequence_length values of 100 and 200 and found that 200 makes the model converge faster. 2. Tried several hidden_dim and n_layers values and chose these because the training loss is lower and out-of-memory (OOM) errors do not happen. --- CheckpointAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
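Before the full `generate` function below, here is a minimal standalone sketch of just the top-k sampling step it uses (the score values are toy numbers for illustration, not real model output):
```python
import numpy as np
import torch
import torch.nn.functional as F

# toy "RNN output" scores for a vocabulary of 6 words (illustrative values only)
scores = torch.tensor([[2.0, 1.0, 0.5, 0.2, -1.0, -2.0]])
p = F.softmax(scores, dim=1).data       # convert scores to word probabilities
p, top_i = p.topk(3)                    # keep only the 3 most likely words
p = p.numpy().squeeze()
top_i = top_i.numpy().squeeze()
next_word_id = np.random.choice(top_i, p=p / p.sum())  # sample among the top 3
print(next_word_id)                     # usually 0, sometimes 1 or 2
```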
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
    :param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of the TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!) Generate script of length 100
###Code
trained_rnn_200_v2 = helper.load_model('./save/trained_rnn_200_v2')
# run the cell multiple times to get different results!
gen_length = 100 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn_200_v2, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:49: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Generate script of length 200
###Code
trained_rnn_200_v2 = helper.load_model('./save/trained_rnn_200_v2')
# run the cell multiple times to get different results!
gen_length = 200 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn_200_v2, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:49: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Generate script of length 400
###Code
trained_rnn_200_v2 = helper.load_model('./save/trained_rnn_200_v2')
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn_200_v2, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:49: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
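# Note: an equivalent pattern that closes the file automatically is:
#   with open("generated_script_1.txt", "w") as f:
#       f.write(generated_script)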
###Output
_____no_output_____ |
ML Models/high_low_classification.ipynb | ###Markdown
Setup
###Code
# Import Dependencies.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import json
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import load_model
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from joblib import dump, load
# Fetch the data from the API.
listings_json = requests.get("http://127.0.0.1:5000/housingDataAPI/v1.0/listings").json()
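# (Optional) one could also confirm the request succeeded before parsing, e.g. by
# checking the response's status_code (200) or calling raise_for_status() on it.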
# Examine the data.
print(json.dumps(listings_json[0], indent=4, sort_keys=True))
# Create a dataframe to use for our model.
data_df = pd.DataFrame(listings_json)
print(len(data_df))
data_df.head()
###Output
2056
###Markdown
Data Preprocessing
###Code
# Make a copy of the original data frame to modify.
model_df = data_df.copy()
# Insert a lot value of 0 for condos and floating homes.
for index, row in model_df.iterrows():
if ("Condo" in row["home_type"]) | ("Floating" in row["home_type"]):
model_df.loc[index, "lot_size"] = 0
else:
pass
# Include only those columns that will be used in the deep learning model.
model_df = model_df.loc[:, ["bathrooms",
"bedrooms",
"built",
"lot_size",
"square_feet",
"home_type",
"high_school",
"zipcode",
"price"]
]
# Drop rows with NaN entries.
model_df.dropna(inplace=True)
# Check the model data.
print(len(model_df))
model_df.head()
# Simplify home types in model_df.
for i in model_df.index:
if "Floating" in model_df.at[i, "home_type"]:
model_df.at[i, "home_type"] = "Floating"
if "Condo" in model_df.at[i, "home_type"]:
model_df.at[i, "home_type"] = "Condo"
if "Single Family" in model_df.at[i, "home_type"]:
model_df.at[i, "home_type"] = "Single Family"
if "Manufactured" in model_df.at[i, "home_type"]:
model_df.at[i, "home_type"] = "Manufactured"
model_df.head()
# Create district df.
school_dict = ({"high_school" : ['Reynolds', 'Parkrose', 'David Douglas', 'Centennial', 'Cleveland',
'Lincoln', 'Madison', 'Jefferson', 'Roosevelt', 'Sunset','Westview', 'Liberty', 'Beaverton',
'Grant', 'Southridge', 'Tigard', 'Wilson', 'Riverdale', 'Lake Oswego', 'Franklin',
'Tualatin', 'Milwaukie', 'Scappoose'], "district" : ['Reynolds', 'Parkrose','David Douglas',
'Centennial', 'Portland Public', 'Portland Public', 'Portland Public', 'Portland Public',
'Portland Public', 'Beaverton', 'Beaverton', 'Hillsboro', 'Beaverton', 'Portland Public',
'Beaverton', 'Tigard-Tualatin', 'Portland Public', 'Riverdale', 'Lake Oswego', 'Portland Public',
'Tigard-Tualatin', 'North Clackamas', 'Scappose']})
district_df = pd.DataFrame(school_dict)
# Merge into model_df.
model_df = pd.merge(model_df, district_df, on="high_school")
# Drop the high_school column.
model_df.drop("high_school", axis=1, inplace=True)
print(len(model_df))
model_df.head()
# # Rank the home_types in order of mean home price.
# home_type = model_df[["price","home_type"]]
# home_typeAVG = home_type.groupby(["home_type"]).mean().sort_values(by=["price"], ascending=False)
# home_typeRanker = home_typeAVG.reset_index(drop=False)
# # Create a dictionary to rank the zipcode for a particular listing.
# home_type_ranker_dict = {}
# for index, row in home_typeRanker.iterrows():
# home_type_ranker_dict[row["home_type"]] = index
# home_type_ranker_dict
# # Create a home_type ranking for each listing.
# model_df["home_type_rank"] = [home_type_ranker_dict[home_type] for home_type in model_df["home_type"]]
# Drop the home_type for each listing.
# model_df.drop("home_type", axis=1, inplace=True)
# model_df.head()
# # Rank the districts in order of mean home price.
# district = model_df[["price","district"]]
# districtAVG = district.groupby(["district"]).mean().sort_values(by=["price"], ascending=False)
# districtRanker = districtAVG.reset_index(drop=False)
# # Create a dictionary to rank the district for a particular listing.
# district_ranker_dict = {}
# for index, row in districtRanker.iterrows():
# district_ranker_dict[row["district"]] = index
# district_ranker_dict
# # Create a district ranking for each listing.
# model_df["district_rank"] = [district_ranker_dict[district] for district in model_df["district"]]
# # Drop the district for each listing.
# model_df.drop("district", axis=1, inplace=True)
# model_df.head()
# # Rank the zipcodes in order of mean home price.
# zipcode = model_df[["price","zipcode"]]
# zipcodeAVG = zipcode.groupby(["zipcode"]).mean().sort_values(by=["price"], ascending=False)
# zipcodeRanker = zipcodeAVG.reset_index(drop=False)
# # Create a dictionary to rank the zipcode for a particular listing.
# zipcode_ranker_dict = {}
# for index, row in zipcodeRanker.iterrows():
# zipcode_ranker_dict[int(row["zipcode"])] = index
# zipcode_ranker_dict
# # Create a zipcode ranking for each listing.
# model_df["zipcode_rank"] = [zipcode_ranker_dict[zipcode] for zipcode in model_df["zipcode"]]
# Drop the zipcode for each listing.
# model_df.drop("zipcode", axis=1, inplace=True)
# model_df.head()
# Bin prices into five quantile-based ranges (each bin holds roughly the same number of listings).
model_df["price_range"] = pd.qcut(model_df["price"], 5)
# Drop the original price data.
model_df.drop("price", axis=1, inplace=True)
model_df.head()
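# Note: pd.qcut creates quantile (equal-frequency) bins, so each price_range interval
# contains roughly the same number of listings. Illustrative example:
#   pd.qcut(pd.Series(range(10)), 5)  ->  5 intervals with ~2 values each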
# Get dummies for the values in home_type to use in the model.
model_df = pd.get_dummies(model_df, columns=["home_type","district","zipcode"])
model_df.head()
# Assign X (input) and y (target).
X = model_df.drop("price_range", axis=1)
y = model_df["price_range"]
# Split the data into training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# Create a MinMaxScaler model and fit it to the training data
X_scaler = MinMaxScaler().fit(X_train)
# Save the scalar.
dump(X_scaler, 'minmax_scaler.bin', compress=True)
# Transform the training and testing data using the X_scaler and y_scaler models.
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# Label encode the target data.
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
# Save the label encoder
dump(label_encoder, 'label_encoder.bin', compress=True)
# Convert encoded labels to one-hot encoding.
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
###Output
_____no_output_____
###Markdown
Run Random Forest Classifier
###Code
# Create a random forest classifier, fit to the training data, and score on the testing data.
rf = RandomForestClassifier(n_estimators=1000)
rf = rf.fit(X_train_scaled, y_train_categorical)
print(rf.score(X_test_scaled, y_test_categorical))
# Find the importances of each feature.
feature_names = X.columns
importances = rf.feature_importances_
print(sorted(zip(rf.feature_importances_, feature_names), reverse=True))
###Output
0.5761316872427984
[(0.3039031300131124, 'square_feet'), (0.1502461492277107, 'built'), (0.09013132725169214, 'bathrooms'), (0.08942538655509241, 'lot_size'), (0.07423816081124927, 'bedrooms'), (0.020151117325278307, 'district_Portland Public'), (0.016654539237920558, 'zipcode_97209'), (0.013453692412634733, 'home_type_Single Family'), (0.013447367368685476, 'home_type_Condo'), (0.013424733999524888, 'zipcode_97266'), (0.012452385192592777, 'zipcode_97217'), (0.01220417296667141, 'zipcode_97229'), (0.01117291836353492, 'zipcode_97206'), (0.011092732493363256, 'zipcode_97219'), (0.010220321800881463, 'zipcode_97211'), (0.010070429074895216, 'district_Beaverton'), (0.00991977806692704, 'zipcode_97202'), (0.009229361269160716, 'district_David Douglas'), (0.009229282265180482, 'zipcode_97201'), (0.0086838283343154, 'zipcode_97239'), (0.007821109349351542, 'zipcode_97210'), (0.0067940587391437865, 'zipcode_97203'), (0.006347414933143469, 'zipcode_97213'), (0.006234965066157125, 'zipcode_97212'), (0.006018164265230408, 'zipcode_97236'), (0.005506991846666589, 'zipcode_97218'), (0.005409949842693945, 'district_Reynolds'), (0.0053479519079011086, 'zipcode_97230'), (0.005269073289292912, 'zipcode_97220'), (0.005030736115700315, 'zipcode_97225'), (0.004796913369523958, 'zipcode_97215'), (0.00406685427550804, 'district_Parkrose'), (0.0038286428590160832, 'zipcode_97216'), (0.003759436168158648, 'district_Centennial'), (0.0037399509289997046, 'zipcode_97214'), (0.0034995471260292504, 'zipcode_97221'), (0.0034954578947348452, 'district_Riverdale'), (0.0032951945291798644, 'home_type_Floating'), (0.0031566098333256605, 'zipcode_97233'), (0.002862343853161521, 'zipcode_97232'), (0.002569094764458886, 'zipcode_97205'), (0.002276321660168568, 'zipcode_97223'), (0.0020904269917342497, 'zipcode_97231'), (0.0014562388834992895, 'zipcode_97227'), (0.001025100721998648, 'district_Hillsboro'), (0.00098033844563518, 'district_Tigard-Tualatin'), (0.0007798214587750365, 'zipcode_97224'), (0.0006435681221363556, 'zipcode_97204'), (0.000631887501361326, 'district_Scappose'), (0.0005625464400406923, 'zipcode_97035'), (0.00040442320311863286, 'district_North Clackamas'), (0.00038559876766455125, 'district_Lake Oswego'), (0.0002830445565522987, 'home_type_Manufactured'), (0.0002794082595139021, 'zipcode_97222')]
###Markdown
Create a Deep Learning Model
###Code
# Create a deep learning Sequential model.
deep_model = Sequential()
deep_model.add(Dense(units=100, activation='relu', input_dim=54))
deep_model.add(Dense(units=100, activation='relu'))
deep_model.add(Dense(units=5, activation='softmax'))
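# Note: input_dim must equal the number of feature columns in X after get_dummies
# (54 here), and units=5 in the output layer matches the 5 price_range bins from pd.qcut.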
# Compile and fit the model.
deep_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
deep_model.fit(
X_train_scaled,
y_train_categorical,
epochs=100,
shuffle=True,
verbose=2
)
###Output
Train on 1457 samples
Epoch 1/100
1457/1457 - 1s - loss: 1.5528 - accuracy: 0.2896
Epoch 2/100
1457/1457 - 0s - loss: 1.3527 - accuracy: 0.4180
Epoch 3/100
1457/1457 - 0s - loss: 1.1434 - accuracy: 0.5244
Epoch 4/100
1457/1457 - 0s - loss: 1.0061 - accuracy: 0.5806
Epoch 5/100
1457/1457 - 0s - loss: 0.9294 - accuracy: 0.6170
Epoch 6/100
1457/1457 - 0s - loss: 0.8729 - accuracy: 0.6404
Epoch 7/100
1457/1457 - 0s - loss: 0.8524 - accuracy: 0.6342
Epoch 8/100
1457/1457 - 0s - loss: 0.8236 - accuracy: 0.6706
Epoch 9/100
1457/1457 - 0s - loss: 0.8090 - accuracy: 0.6719
Epoch 10/100
1457/1457 - 0s - loss: 0.7960 - accuracy: 0.6644
Epoch 11/100
1457/1457 - 0s - loss: 0.7857 - accuracy: 0.6658
Epoch 12/100
1457/1457 - 0s - loss: 0.7741 - accuracy: 0.6795
Epoch 13/100
1457/1457 - 0s - loss: 0.7677 - accuracy: 0.6822
Epoch 14/100
1457/1457 - 0s - loss: 0.7582 - accuracy: 0.6802
Epoch 15/100
1457/1457 - 0s - loss: 0.7580 - accuracy: 0.6747
Epoch 16/100
1457/1457 - 0s - loss: 0.7358 - accuracy: 0.7008
Epoch 17/100
1457/1457 - 0s - loss: 0.7335 - accuracy: 0.6932
Epoch 18/100
1457/1457 - 0s - loss: 0.7212 - accuracy: 0.6939
Epoch 19/100
1457/1457 - 0s - loss: 0.7147 - accuracy: 0.6946
Epoch 20/100
1457/1457 - 0s - loss: 0.7095 - accuracy: 0.7145
Epoch 21/100
1457/1457 - 0s - loss: 0.7054 - accuracy: 0.7035
Epoch 22/100
1457/1457 - 0s - loss: 0.6994 - accuracy: 0.7028
Epoch 23/100
1457/1457 - 0s - loss: 0.7065 - accuracy: 0.7152
Epoch 24/100
1457/1457 - 0s - loss: 0.7037 - accuracy: 0.7042
Epoch 25/100
1457/1457 - 0s - loss: 0.6885 - accuracy: 0.7138
Epoch 26/100
1457/1457 - 0s - loss: 0.6863 - accuracy: 0.7083
Epoch 27/100
1457/1457 - 0s - loss: 0.6888 - accuracy: 0.7056
Epoch 28/100
1457/1457 - 0s - loss: 0.6825 - accuracy: 0.7220
Epoch 29/100
1457/1457 - 0s - loss: 0.6698 - accuracy: 0.7282
Epoch 30/100
1457/1457 - 0s - loss: 0.6718 - accuracy: 0.7172
Epoch 31/100
1457/1457 - 0s - loss: 0.6718 - accuracy: 0.7186
Epoch 32/100
1457/1457 - 0s - loss: 0.6644 - accuracy: 0.7056
Epoch 33/100
1457/1457 - 0s - loss: 0.6617 - accuracy: 0.7213
Epoch 34/100
1457/1457 - 0s - loss: 0.6555 - accuracy: 0.7316
Epoch 35/100
1457/1457 - 0s - loss: 0.6530 - accuracy: 0.7200
Epoch 36/100
1457/1457 - 0s - loss: 0.6518 - accuracy: 0.7323
Epoch 37/100
1457/1457 - 0s - loss: 0.6558 - accuracy: 0.7076
Epoch 38/100
1457/1457 - 0s - loss: 0.6485 - accuracy: 0.7241
Epoch 39/100
1457/1457 - 0s - loss: 0.6557 - accuracy: 0.7179
Epoch 40/100
1457/1457 - 0s - loss: 0.6411 - accuracy: 0.7275
Epoch 41/100
1457/1457 - 0s - loss: 0.6428 - accuracy: 0.7310
Epoch 42/100
1457/1457 - 0s - loss: 0.6362 - accuracy: 0.7289
Epoch 43/100
1457/1457 - 0s - loss: 0.6319 - accuracy: 0.7351
Epoch 44/100
1457/1457 - 0s - loss: 0.6406 - accuracy: 0.7261
Epoch 45/100
1457/1457 - 0s - loss: 0.6253 - accuracy: 0.7399
Epoch 46/100
1457/1457 - 0s - loss: 0.6274 - accuracy: 0.7344
Epoch 47/100
1457/1457 - 0s - loss: 0.6283 - accuracy: 0.7351
Epoch 48/100
1457/1457 - 0s - loss: 0.6249 - accuracy: 0.7303
Epoch 49/100
1457/1457 - 0s - loss: 0.6237 - accuracy: 0.7275
Epoch 50/100
1457/1457 - 0s - loss: 0.6186 - accuracy: 0.7323
Epoch 51/100
1457/1457 - 0s - loss: 0.6182 - accuracy: 0.7378
Epoch 52/100
1457/1457 - 0s - loss: 0.6156 - accuracy: 0.7371
Epoch 53/100
1457/1457 - 0s - loss: 0.6101 - accuracy: 0.7481
Epoch 54/100
1457/1457 - 0s - loss: 0.6075 - accuracy: 0.7378
Epoch 55/100
1457/1457 - 0s - loss: 0.6111 - accuracy: 0.7364
Epoch 56/100
1457/1457 - 0s - loss: 0.6018 - accuracy: 0.7515
Epoch 57/100
1457/1457 - 0s - loss: 0.6099 - accuracy: 0.7289
Epoch 58/100
1457/1457 - 0s - loss: 0.6042 - accuracy: 0.7378
Epoch 59/100
1457/1457 - 0s - loss: 0.6000 - accuracy: 0.7461
Epoch 60/100
1457/1457 - 0s - loss: 0.5947 - accuracy: 0.7454
Epoch 61/100
1457/1457 - 0s - loss: 0.5946 - accuracy: 0.7522
Epoch 62/100
1457/1457 - 0s - loss: 0.6010 - accuracy: 0.7385
Epoch 63/100
1457/1457 - 0s - loss: 0.5965 - accuracy: 0.7509
Epoch 64/100
1457/1457 - 0s - loss: 0.5917 - accuracy: 0.7515
Epoch 65/100
1457/1457 - 0s - loss: 0.5910 - accuracy: 0.7529
Epoch 66/100
1457/1457 - 0s - loss: 0.5805 - accuracy: 0.7543
Epoch 67/100
1457/1457 - 0s - loss: 0.5782 - accuracy: 0.7577
Epoch 68/100
1457/1457 - 0s - loss: 0.5840 - accuracy: 0.7495
Epoch 69/100
1457/1457 - 0s - loss: 0.5732 - accuracy: 0.7509
Epoch 70/100
1457/1457 - 0s - loss: 0.5740 - accuracy: 0.7481
Epoch 71/100
1457/1457 - 0s - loss: 0.5873 - accuracy: 0.7447
Epoch 72/100
1457/1457 - 0s - loss: 0.5735 - accuracy: 0.7488
Epoch 73/100
1457/1457 - 0s - loss: 0.5697 - accuracy: 0.7543
Epoch 74/100
1457/1457 - 0s - loss: 0.5725 - accuracy: 0.7474
Epoch 75/100
1457/1457 - 0s - loss: 0.5678 - accuracy: 0.7543
Epoch 76/100
1457/1457 - 0s - loss: 0.5681 - accuracy: 0.7577
Epoch 77/100
1457/1457 - 0s - loss: 0.5718 - accuracy: 0.7495
Epoch 78/100
1457/1457 - 0s - loss: 0.5578 - accuracy: 0.7612
Epoch 79/100
1457/1457 - 0s - loss: 0.5639 - accuracy: 0.7529
Epoch 80/100
1457/1457 - 0s - loss: 0.5633 - accuracy: 0.7570
Epoch 81/100
1457/1457 - 0s - loss: 0.5645 - accuracy: 0.7605
Epoch 82/100
1457/1457 - 0s - loss: 0.5534 - accuracy: 0.7701
Epoch 83/100
1457/1457 - 0s - loss: 0.5549 - accuracy: 0.7625
Epoch 84/100
1457/1457 - 0s - loss: 0.5519 - accuracy: 0.7666
Epoch 85/100
1457/1457 - 0s - loss: 0.5523 - accuracy: 0.7646
Epoch 86/100
1457/1457 - 0s - loss: 0.5497 - accuracy: 0.7660
Epoch 87/100
1457/1457 - 0s - loss: 0.5469 - accuracy: 0.7701
Epoch 88/100
1457/1457 - 0s - loss: 0.5429 - accuracy: 0.7694
Epoch 89/100
1457/1457 - 0s - loss: 0.5469 - accuracy: 0.7618
Epoch 90/100
1457/1457 - 0s - loss: 0.5437 - accuracy: 0.7660
Epoch 91/100
1457/1457 - 0s - loss: 0.5320 - accuracy: 0.7728
Epoch 92/100
1457/1457 - 0s - loss: 0.5354 - accuracy: 0.7646
Epoch 93/100
1457/1457 - 0s - loss: 0.5334 - accuracy: 0.7776
Epoch 94/100
1457/1457 - 0s - loss: 0.5312 - accuracy: 0.7749
Epoch 95/100
1457/1457 - 0s - loss: 0.5352 - accuracy: 0.7632
Epoch 96/100
1457/1457 - 0s - loss: 0.5376 - accuracy: 0.7687
Epoch 97/100
1457/1457 - 0s - loss: 0.5300 - accuracy: 0.7721
Epoch 98/100
1457/1457 - 0s - loss: 0.5282 - accuracy: 0.7769
Epoch 99/100
1457/1457 - 0s - loss: 0.5236 - accuracy: 0.7817
Epoch 100/100
1457/1457 - 0s - loss: 0.5214 - accuracy: 0.7728
###Markdown
Quantify our Trained Model
###Code
model_loss, model_accuracy = deep_model.evaluate(X_test_scaled, y_test_categorical, verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
###Output
486/1 - 0s - loss: 1.0845 - accuracy: 0.6584
Loss: 0.9257818293669586, Accuracy: 0.6584362387657166
###Markdown
Make Predictions
###Code
# Use the first 10 test data values to make a prediction and compare it to the actual labels.
encoded_predictions = deep_model.predict_classes(X_test_scaled[:10])
prediction_labels = label_encoder.inverse_transform(encoded_predictions)
print(f"Predicted classes: {prediction_labels}")
print(f"Actual Labels: {list(y_test[:10])}")
###Output
Predicted classes: [Interval(348340.0, 449000.0, closed='right')
Interval(449000.0, 609000.0, closed='right')
Interval(449000.0, 609000.0, closed='right')
Interval(348340.0, 449000.0, closed='right')
Interval(348340.0, 449000.0, closed='right')
Interval(825000.0, 4495000.0, closed='right')
Interval(609000.0, 825000.0, closed='right')
Interval(449000.0, 609000.0, closed='right')
Interval(825000.0, 4495000.0, closed='right')
Interval(449000.0, 609000.0, closed='right')]
Actual Labels: [Interval(123499.999, 348340.0, closed='right'), Interval(449000.0, 609000.0, closed='right'), Interval(348340.0, 449000.0, closed='right'), Interval(123499.999, 348340.0, closed='right'), Interval(449000.0, 609000.0, closed='right'), Interval(825000.0, 4495000.0, closed='right'), Interval(609000.0, 825000.0, closed='right'), Interval(825000.0, 4495000.0, closed='right'), Interval(825000.0, 4495000.0, closed='right'), Interval(449000.0, 609000.0, closed='right')]
###Markdown
Save the trained model
###Code
# Save the model
deep_model.save("housing_model_trained.h5")
###Output
_____no_output_____
###Markdown
Test the saved model, scaler, and label encoder
###Code
# Load the model, scaler and label encoder.
model = load_model("housing_model_trained.h5")
scaler = load("minmax_scaler.bin")
label_encoder = load("label_encoder.bin")
# Input data for testing.
input_data = np.array(np.array([X.iloc[0]]))
X.iloc[0]
encoded_predictions = model.predict_classes(scaler.transform(input_data))
prediction_labels = label_encoder.inverse_transform(encoded_predictions)
print(f"{prediction_labels[0].left}, {prediction_labels[0].right}")
###Output
123499.999, 348340.0
|
intermediate_importing_data_in_python/1_importing_data_from_the_internet.ipynb | ###Markdown
Importing flat files from the web: your turn!You are about to import your first file from the web! The flat file you will import will be `'winequality-red.csv'` from the University of California, Irvine's [Machine Learning repository](http://archive.ics.uci.edu/ml/index.html). The flat file contains tabular data of physiochemical properties of red wine, such as pH, alcohol content and citric acid content, along with wine quality rating.The URL of the file is```'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'```After you import it, you'll check your working directory to confirm that it is there and then you'll load it into a `pandas` DataFrame.Instructions- Import the function `urlretrieve` from the subpackage `urllib.request`.- Assign the URL of the file to the variable `url`.- Use the function `urlretrieve()` to save the file locally as `'winequality-red.csv'`.- Execute the remaining code to load `'winequality-red.csv'` in a pandas DataFrame and to print its head.
###Code
# Import package
from urllib.request import urlretrieve
# Import pandas
import pandas as pd
# Assign url of file: url
url = 'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'
# Save file locally
urlretrieve(url, 'winequality-red.csv')
# Read file into a DataFrame and print its head
df = pd.read_csv('winequality-red.csv', sep=';')
df.head()
###Output
_____no_output_____
###Markdown
Opening and reading flat files from the webYou have just imported a file from the web, saved it locally and loaded it into a DataFrame. If you just wanted to load a file from the web into a DataFrame without first saving it locally, you can do that easily using `pandas`. In particular, you can use the function `pd.read_csv()` with the URL as the first argument and the separator `sep` as the second argument.The URL of the file, once again, is```'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'```Instructions- Assign the URL of the file to the variable `url`.- Read file into a DataFrame `df` using `pd.read_csv()`, recalling that the separator in the file is `';'`.- Print the head of the DataFrame `df`.- Execute the rest of the code to plot histogram of the first feature in the DataFrame `df`.
###Code
# Import packages
import matplotlib.pyplot as plt
import pandas as pd
# Assign url of file: url
url = 'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'
# Read file into a DataFrame: df
df = pd.read_csv(url, sep=';')
# Print the head of the DataFrame
print(df.head())
# Plot first column of df
pd.DataFrame.hist(df.iloc[:, 0:1])
plt.xlabel('fixed acidity (g(tartaric acid)/dm$^3$)')
plt.ylabel('count')
plt.show()
###Output
fixed acidity volatile acidity citric acid residual sugar chlorides \
0 7.4 0.70 0.00 1.9 0.076
1 7.8 0.88 0.00 2.6 0.098
2 7.8 0.76 0.04 2.3 0.092
3 11.2 0.28 0.56 1.9 0.075
4 7.4 0.70 0.00 1.9 0.076
free sulfur dioxide total sulfur dioxide density pH sulphates \
0 11.0 34.0 0.9978 3.51 0.56
1 25.0 67.0 0.9968 3.20 0.68
2 15.0 54.0 0.9970 3.26 0.65
3 17.0 60.0 0.9980 3.16 0.58
4 11.0 34.0 0.9978 3.51 0.56
alcohol quality
0 9.4 5
1 9.8 5
2 9.8 5
3 9.8 6
4 9.4 5
###Markdown
Importing non-flat files from the webCongrats! You've just loaded a flat file from the web into a DataFrame without first saving it locally using the `pandas` function `pd.read_csv()`. This function is super cool because it has close relatives that allow you to load all types of files, not only flat ones. In this interactive exercise, you'll use `pd.read_excel()` to import an Excel spreadsheet.The URL of the spreadsheet is```'http://s3.amazonaws.com/assets.datacamp.com/course/importing_data_into_r/latitude.xls'```Your job is to use `pd.read_excel()` to read in all of its sheets, print the sheet names and then print the head of the first sheet _using its name, not its index_.Note that the output of `pd.read_excel()` is a Python dictionary with sheet names as keys and corresponding DataFrames as corresponding values.Instructions- Assign the URL of the file to the variable `url`.- Read the file in `url` into a dictionary `xls` using `pd.read_excel()` recalling that, in order to import all sheets you need to pass `None` to the argument `sheet_name`.- Print the names of the sheets in the Excel spreadsheet; these will be the keys of the dictionary `xls`.- Print the head of the first sheet _using the sheet name, not the index of the sheet_! The sheet name is `'1700'`.
###Code
# Import package
import pandas as pd
# Assign url of file: url
url = 'http://s3.amazonaws.com/assets.datacamp.com/course/importing_data_into_r/latitude.xls'
# Read in all sheets of Excel file: xls
xls = pd.read_excel(url, sheet_name=None)
# Print the sheetnames
print(xls.keys())
# Print the head of the first sheet (using its name, NOT its index)
print(xls['1700'].head())
###Output
dict_keys(['1700', '1900'])
country 1700
0 Afghanistan 34.565000
1 Akrotiri and Dhekelia 34.616667
2 Albania 41.312000
3 Algeria 36.720000
4 American Samoa -14.307000
###Markdown
Performing HTTP requests in Python using urllibNow that you know the basics behind HTTP GET requests, it's time to perform some of your own. In this interactive exercise, you will ping our very own DataCamp servers to perform a GET request to extract information from the first coding exercise of this course, `"https://campus.datacamp.com/courses/1606/4135?ex=2"`.In the next exercise, you'll extract the HTML itself. Right now, however, you are going to package and send the request and then catch the response.Instructions- Import the functions `urlopen` and `Request` from the subpackage `urllib.request`.- Package the request to the url `"https://campus.datacamp.com/courses/1606/4135?ex=2"` using the function `Request()` and assign it to `request`.- Send the request and catch the response in the variable `response` with the function `urlopen()`.- Run the rest of the code to see the datatype of `response` and to close the connection!
###Code
# Import packages
from urllib.request import urlopen, Request
# Specify the url
url = 'https://campus.datacamp.com/courses/1606/4135?ex=2'
# This packages the request: request
request = Request(url)
# Sends the request and catches the response: response
response = urlopen(request)
# Print the datatype of response
print(type(response))
# Be polite and close the response!
response.close()
###Output
<class 'http.client.HTTPResponse'>
###Markdown
Printing HTTP request results in Python using urllibYou have just packaged and sent a GET request to `"https://campus.datacamp.com/courses/1606/4135?ex=2"` and then caught the response. You saw that such a response is a `http.client.HTTPResponse` object. The question remains: what can you do with this response?Well, as it came from an HTML page, you could _read_ it to extract the HTML and, in fact, such a `http.client.HTTPResponse` object has an associated `read()` method. In this exercise, you'll build on your previous great work to extract the response and print the HTML.Instructions- Send the request and catch the `response` in the variable response with the function `urlopen()`, as in the previous exercise.- Extract the response using the `read()` method and store the result in the variable `html`.- Print the string `html`.- Hit submit to perform all of the above and to close the response: be tidy!
###Code
# Import packages
from urllib.request import urlopen, Request
# Specify the url
url = 'https://campus.datacamp.com/courses/1606/4135?ex=2'
# This packages the request
request = Request(url)
# Sends the request and catches the response: response
response = urlopen(request)
# Extract the response: html
html = response.read()
# Print the html
print(html)
# Be polite and close the response!
response.close()
###Output
You'll learn how to stream real-time Twitter data, and how to analyze and visualize it.","^1K",3,"^K","diving-deep-into-the-twitter-api","^1L",8,"^1M","https://assets.datacamp.com/production/default/badges/missing.png","^1N","https://assets.datacamp.com/production/default/badges/missing_unc.png","^N","06/11/2020","^1O","https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/slides/chapter3.pdf","^1P",null,"xp",700,"^1Q",2,"^:",[["^ ","^Q","VideoExercise","^D","The Twitter API and Authentication","^1R",50,"^1K",1,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=1"],["^ ","^Q","NormalExercise","^D","API Authentication","^1R",100,"^1K",2,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=2"],["^ ","^Q","NormalExercise","^D","Streaming tweets","^1R",100,"^1K",3,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=3"],["^ ","^Q","NormalExercise","^D","Load and explore your Twitter data","^1R",100,"^1K",4,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=4"],["^ ","^Q","NormalExercise","^D","Twitter data to DataFrame","^1R",100,"^1K",5,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=5"],["^ ","^Q","NormalExercise","^D","A little bit of Twitter text analysis","^1R",100,"^1K",6,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=6"],["^ ","^Q","NormalExercise","^D","Plotting your Twitter data","^1R",100,"^1K",7,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=7"],["^ ","^Q","VideoExercise","^D","Final Thoughts","^1R",50,"^1K",8,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/diving-deep-into-the-twitter-api?ex=8"]]]]]]]],"^6",["^0",["^ ","n","PreFetchedRequestRecord","v",["^ ","^B","SUCCESS","^C",["^ ","id",4135,"^1J",null,"^D","Importing data from the Internet","^E","The web is a rich source of data from which you can extract various types of insights and findings. In this chapter, you will learn how to get data from the web, whether it is stored in files or in HTML. 
You'll also learn the basics of scraping and parsing web data.","^1K",1,"^K","importing-data-from-the-internet-1","^1L",12,"^1M","https://assets.datacamp.com/production/default/badges/missing.png","^1N","https://assets.datacamp.com/production/default/badges/missing_unc.png","^N","06/11/2020","^1O","https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/slides/chapter1.pdf","^1P",true,"xp",1050,"^1Q",3,"^:",[["^ ","^Q","VideoExercise","^D","Importing flat files from the web","^1R",50,"^1K",1,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=1"],["^ ","^Q","NormalExercise","^D","Importing flat files from the web: your turn!","^1R",100,"^1K",2,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=2"],["^ ","^Q","NormalExercise","^D","Opening and reading flat files from the web","^1R",100,"^1K",3,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=3"],["^ ","^Q","NormalExercise","^D","Importing non-flat files from the web","^1R",100,"^1K",4,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=4"],["^ ","^Q","VideoExercise","^D","HTTP requests to import files from the web","^1R",50,"^1K",5,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=5"],["^ ","^Q","NormalExercise","^D","Performing HTTP requests in Python using urllib","^1R",100,"^1K",6,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=6"],["^ ","^Q","NormalExercise","^D","Printing HTTP request results in Python using urllib","^1R",100,"^1K",7,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=7"],["^ ","^Q","NormalExercise","^D","Performing HTTP requests in Python using requests","^1R",100,"^1K",8,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=8"],["^ ","^Q","VideoExercise","^D","Scraping the web in Python","^1R",50,"^1K",9,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=9"],["^ ","^Q","NormalExercise","^D","Parsing HTML with BeautifulSoup","^1R",100,"^1K",10,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=10"],["^ ","^Q","NormalExercise","^D","Turning a webpage into data using BeautifulSoup: getting the text","^1R",100,"^1K",11,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=11"],["^ ","^Q","NormalExercise","^D","Turning a webpage into data using BeautifulSoup: getting the hyperlinks","^1R",100,"^1K",12,"url","https://campus.datacamp.com/courses/intermediate-importing-data-in-python/importing-data-from-the-internet-1?ex=12"]]]]]],"^:",["^0",["^ ","n","PreFetchedRequestRecord","v",["^ ","^B","SUCCESS","^C",[["^ ","id",990668,"^Q","VideoExercise","assignment",null,"^D","Importing flat files from the 
web","sample_code","","instructions",null,"^1K",1,"sct","","pre_exercise_code","","solution","","hint",null,"attachments",null,"xp",50,"possible_answers",[],"feedbacks",[],"question","","video_link",null,"video_hls",null,"aspect_ratio",56.25,"projector_key","course_1606_59604c018a6e132016cd26144a12fee0","key","e36457c7ed","language","python","course_id",1606,"chapter_id",4135,"^13",null,"version","v0","randomNumber",0.9009897811570702,"externalId",990668],["^ ","id",42707,"^Q","NormalExercise","^1S","<p>You are about to import your first file from the web! The flat file you will import will be <code>'winequality-red.csv'</code> from the University of California, Irvine's <a href=\\\\"http://archive.ics.uci.edu/ml/index.html\\\\">Machine Learning repository</a>. The flat file contains tabular data of physiochemical properties of red wine, such as pH, alcohol content and citric acid content, along with wine quality rating.</p>\\\\n<p>The URL of the file is</p>\\\\n<pre><code>'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'\\\\n</code></pre>\\\\n<p>After you import it, you'll check your working directory to confirm that it is there and then you'll load it into a <code>pandas</code> DataFrame.</p>","^D","Importing flat files from the web: your turn!","^1T","# Import package\\\\nfrom ____ import ____\\\\n\\\\n# Import pandas\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\n\\\\n\\\\n# Save file locally\\\\n\\\\n\\\\n# Read file into a DataFrame and print its head\\\\ndf = pd.read_csv('winequality-red.csv', sep=';')\\\\nprint(df.head())","^1U","<ul>\\\\n<li>Import the function <code>urlretrieve</code> from the subpackage <code>urllib.request</code>.</li>\\\\n<li>Assign the URL of the file to the variable <code>url</code>.</li>\\\\n<li>Use the function <code>urlretrieve()</code> to save the file locally as <code>'winequality-red.csv'</code>.</li>\\\\n<li>Execute the remaining code to load <code>'winequality-red.csv'</code> in a pandas DataFrame and to print its head to the shell.</li>\\\\n</ul>","^1K",2,"sct","Ex().has_import(\\\\"urllib.request.urlretrieve\\\\")\\\\nEx().has_import(\\\\"pandas\\\\")\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\nEx().check_function(\\\\"urllib.request.urlretrieve\\\\").multi(\\\\n check_args(0).has_equal_value(),\\\\n check_args(1).has_equal_value()\\\\n)\\\\nEx().check_correct(\\\\n check_object(\\\\"df\\\\").has_equal_value(),\\\\n check_function(\\\\"pandas.read_csv\\\\").multi(\\\\n check_args(0).has_equal_value(),\\\\n check_args(1).has_equal_value()\\\\n )\\\\n)\\\\nEx().has_printout(0)\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import package\\\\nfrom urllib.request import urlretrieve\\\\n\\\\n# Import pandas\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\nurl = 'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'\\\\n\\\\n# Save file locally\\\\nurlretrieve(url, 'winequality-red.csv')\\\\n\\\\n# Read file into a DataFrame and print its head\\\\ndf = pd.read_csv('winequality-red.csv', sep=';')\\\\nprint(df.head())","^1X","<ul>\\\\n<li>To import a function <code>y</code> from a subpackage <code>x</code>, execute <code>from x import y</code>.</li>\\\\n<li>This one's a long URL. 
Make sure you typed it in correctly!</li>\\\\n<li>Pass the <em>url</em> to import (in the <code>url</code> object you defined) as the first argument and the <em>filename</em> for saving the file locally as the second argument to <code>urlretrieve()</code>.</li>\\\\n<li>You don't have to change the code for loading <code>'winequality-red.csv'</code> and printing its head.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.3505292251150971,"^2:",42707],["^ ","id",42708,"^Q","NormalExercise","^1S","<p>You have just imported a file from the web, saved it locally and loaded it into a DataFrame. If you just wanted to load a file from the web into a DataFrame without first saving it locally, you can do that easily using <code>pandas</code>. In particular, you can use the function <code>pd.read_csv()</code> with the URL as the first argument and the separator <code>sep</code> as the second argument.</p>\\\\n<p>The URL of the file, once again, is</p>\\\\n<pre><code>'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'\\\\n</code></pre>","^D","Opening and reading flat files from the web","^1T","# Import packages\\\\nimport matplotlib.pyplot as plt\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\n\\\\n\\\\n# Read file into a DataFrame: df\\\\n\\\\n\\\\n# Print the head of the DataFrame\\\\nprint(____)\\\\n\\\\n# Plot first column of df\\\\npd.DataFrame.hist(df.ix[:, 0:1])\\\\nplt.xlabel('fixed acidity (g(tartaric acid)/dm$^3$)')\\\\nplt.ylabel('count')\\\\nplt.show()\\\\n","^1U","<ul>\\\\n<li>Assign the URL of the file to the variable <code>url</code>.</li>\\\\n<li>Read file into a DataFrame <code>df</code> using <code>pd.read_csv()</code>, recalling that the separator in the file is <code>';'</code>.</li>\\\\n<li>Print the head of the DataFrame <code>df</code>.</li>\\\\n<li>Execute the rest of the code to plot histogram of the first feature in the DataFrame <code>df</code>.</li>\\\\n</ul>","^1K",3,"sct","Ex().has_import(\\\\"matplotlib.pyplot\\\\")\\\\nEx().has_import(\\\\"pandas\\\\")\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\nEx().check_correct(\\\\n check_object(\\\\"df\\\\").has_equal_value(),\\\\n check_function(\\\\"pandas.read_csv\\\\").multi(\\\\n check_args(0).has_equal_value(),\\\\n check_args(1).has_equal_value()\\\\n )\\\\n)\\\\nEx().has_printout(0)\\\\nEx().check_function(\\\\"pandas.DataFrame.hist\\\\").check_args(0).has_equal_value()\\\\nEx().check_function(\\\\"matplotlib.pyplot.show\\\\")\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import packages\\\\nimport matplotlib.pyplot as plt\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\nurl = 'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'\\\\n\\\\n# Read file into a DataFrame: df\\\\ndf = pd.read_csv(url, sep=';')\\\\n\\\\n# Print the head of the DataFrame\\\\nprint(df.head())\\\\n\\\\n# Plot first column of df\\\\npd.DataFrame.hist(df.ix[:, 0:1])\\\\nplt.xlabel('fixed acidity (g(tartaric acid)/dm$^3$)')\\\\nplt.ylabel('count')\\\\nplt.show()\\\\n","^1X","<ul>\\\\n<li>Make sure you typed the URL correctly!</li>\\\\n<li>Pass the <em>url</em> (the <code>url</code> object you defined) as the first argument and the <em>separator</em> as the second argument to <code>pd.read_csv()</code>.</li>\\\\n<li>The <em>head</em> of a DataFrame can be accessed by using <code>head()</code> on the DataFrame.</li>\\\\n<li>You don't have to change any of the code 
for plotting the histograms.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.10077051694782435,"^2:",42708],["^ ","id",42709,"^Q","NormalExercise","^1S","<p>Congrats! You've just loaded a flat file from the web into a DataFrame without first saving it locally using the <code>pandas</code> function <code>pd.read_csv()</code>. This function is super cool because it has close relatives that allow you to load all types of files, not only flat ones. In this interactive exercise, you'll use <code>pd.read_excel()</code> to import an Excel spreadsheet.</p>\\\\n<p>The URL of the spreadsheet is</p>\\\\n<pre><code>'http://s3.amazonaws.com/assets.datacamp.com/course/importing_data_into_r/latitude.xls'\\\\n</code></pre>\\\\n<p>Your job is to use <code>pd.read_excel()</code> to read in all of its sheets, print the sheet names and then print the head of the first sheet <em>using its name, not its index</em>.</p>\\\\n<p>Note that the output of <code>pd.read_excel()</code> is a Python dictionary with sheet names as keys and corresponding DataFrames as corresponding values.</p>","^D","Importing non-flat files from the web","^1T","# Import package\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\n\\\\n\\\\n# Read in all sheets of Excel file: xls\\\\n\\\\n\\\\n# Print the sheetnames to the shell\\\\n\\\\n\\\\n# Print the head of the first sheet (using its name, NOT its index)\\\\n\\\\n","^1U","<ul>\\\\n<li>Assign the URL of the file to the variable <code>url</code>.</li>\\\\n<li>Read the file in <code>url</code> into a dictionary <code>xls</code> using <code>pd.read_excel()</code> recalling that, in order to import all sheets you need to pass <code>None</code> to the argument <code>sheet_name</code>.</li>\\\\n<li>Print the names of the sheets in the Excel spreadsheet; these will be the keys of the dictionary <code>xls</code>.</li>\\\\n<li>Print the head of the first sheet <em>using the sheet name, not the index of the sheet</em>! 
The sheet name is <code>'1700'</code></li>\\\\n</ul>","^1K",4,"sct","Ex().has_import('pandas')\\\\nEx().check_correct(\\\\n has_printout(0),\\\\n multi(\\\\n check_correct(\\\\n check_object('xls').is_instance(dict),\\\\n check_correct(\\\\n check_function('pandas.read_excel').multi(\\\\n check_args(0).has_equal_value(),\\\\n check_args('sheet_name').has_equal_value()\\\\n ),\\\\n check_object('url').has_equal_value()\\\\n )\\\\n )\\\\n )\\\\n)\\\\nEx().has_printout(1)\\\\nsuccess_msg(\\\\"Awesome!\\\\")","^1V","","^1W","# Import package\\\\nimport pandas as pd\\\\n\\\\n# Assign url of file: url\\\\nurl = 'http://s3.amazonaws.com/assets.datacamp.com/course/importing_data_into_r/latitude.xls'\\\\n\\\\n# Read in all sheets of Excel file: xls\\\\nxls = pd.read_excel(url, sheet_name=None)\\\\n\\\\n# Print the sheetnames to the shell\\\\nprint(xls.keys())\\\\n\\\\n# Print the head of the first sheet (using its name, NOT its index)\\\\nprint(xls['1700'].head())","^1X","<ul>\\\\n<li>Make sure you typed in the URL correctly!</li>\\\\n<li>Pass the <em>url</em> (the <code>url</code> object you defined) as the first argument and <code>sheet_name</code> with its corresponding value as the second argument to <code>pd.read_excel()</code>.</li>\\\\n<li>The <em>keys</em> of a dictionary can be accessed by using <code>keys()</code> on the dictionary.</li>\\\\n<li>You can access a sheet using the format: <em>dictionary</em><strong>[</strong><em>sheet name or index</em><strong>]</strong>.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.7419977198305243,"^2:",42709],["^ ","id",990669,"^Q","VideoExercise","^1S",null,"^D","HTTP requests to import files from the web","^1T","","^1U",null,"^1K",5,"sct","","^1V","","^1W","","^1X",null,"^1Y",null,"xp",50,"^1Z",[],"^1[",[],"^20","","^21",null,"^22",null,"^23",56.25,"^24","course_1606_9d15ae176be1800b996f7869a82b8087","key","e480d1fdcf","^25","python","^26",1606,"^27",4135,"^13",null,"^28","v0","^29",0.9433112374621455,"^2:",990669],["^ ","id",42711,"^Q","NormalExercise","^1S","<p>Now that you know the basics behind HTTP GET requests, it's time to perform some of your own. In this interactive exercise, you will ping our very own DataCamp servers to perform a GET request to extract information from the first coding exercise of this course, <code>\\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"</code>.</p>\\\\n<p>In the next exercise, you'll extract the HTML itself. 
Right now, however, you are going to package and send the request and then catch the response.</p>","^D","Performing HTTP requests in Python using urllib","^1T","# Import packages\\\\n\\\\n\\\\n# Specify the url\\\\nurl = \\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"\\\\n\\\\n# This packages the request: request\\\\n\\\\n\\\\n# Sends the request and catches the response: response\\\\n\\\\n\\\\n# Print the datatype of response\\\\nprint(type(response))\\\\n\\\\n# Be polite and close the response!\\\\nresponse.close()\\\\n","^1U","<ul>\\\\n<li>Import the functions <code>urlopen</code> and <code>Request</code> from the subpackage <code>urllib.request</code>.</li>\\\\n<li>Package the request to the url <code>\\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"</code> using the function <code>Request()</code> and assign it to <code>request</code>.</li>\\\\n<li>Send the request and catch the response in the variable <code>response</code> with the function <code>urlopen()</code>.</li>\\\\n<li>Run the rest of the code to see the datatype of <code>response</code> and to close the connection!</li>\\\\n</ul>","^1K",6,"sct","\\\\n# Test: import urlopen, Request\\\\nimport_msg = \\\\"Did you correctly import the required packages?\\\\"\\\\nEx().has_import(\\\\n \\\\"urllib.request.urlopen\\\\",\\\\n not_imported_msg=import_msg\\\\n)\\\\nEx().has_import(\\\\n \\\\"urllib.request.Request\\\\",\\\\n not_imported_msg=import_msg\\\\n)\\\\n\\\\n# Test: Predefined code\\\\npredef_msg = \\\\"You don't have to change any of the predefined code.\\\\"\\\\nEx().check_object(\\\\"url\\\\", missing_msg=predef_msg).has_equal_value(incorrect_msg = predef_msg)\\\\n\\\\n# Test: call to Request() and 'request' variable\\\\nEx().check_function(\\\\"urllib.request.Request\\\\").check_args(0).has_equal_value()\\\\nEx().check_object(\\\\"request\\\\")\\\\n \\\\n# Test: call to urlopen() and 'response' variable\\\\nEx().check_function(\\\\"urllib.request.urlopen\\\\").check_args(0).has_equal_ast()\\\\nEx().check_object(\\\\"response\\\\"),\\\\n\\\\n# Test: Predefined code\\\\nEx().has_printout(0)\\\\nEx().check_function(\\\\"response.close\\\\")\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import packages\\\\nfrom urllib.request import urlopen, Request\\\\n\\\\n# Specify the url\\\\nurl = \\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"\\\\n\\\\n# This packages the request: request\\\\nrequest = Request(url)\\\\n\\\\n# Sends the request and catches the response: response\\\\nresponse = urlopen(request)\\\\n\\\\n# Print the datatype of response\\\\nprint(type(response))\\\\n\\\\n# Be polite and close the response!\\\\nresponse.close()\\\\n","^1X","<ul>\\\\n<li>To import two functions in one line, import the first function as usual and add a comma <code>,</code> followed by the second function.</li>\\\\n<li>Pass the <em>url</em> (already in the <code>url</code> object defined) as an argument to <code>Request()</code>.</li>\\\\n<li>Pass <code>request</code> as an argument to <code>urlopen()</code>.</li>\\\\n<li>You don't have to modify the code for printing the datatype of <code>response</code> and closing the connection.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.025326719030948963,"^2:",42711],["^ ","id",42712,"^Q","NormalExercise","^1S","<p>You have just packaged and sent a GET request to <code>\\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"</code> and then caught the response. 
You saw that such a response is a <code>http.client.HTTPResponse</code> object. The question remains: what can you do with this response?</p>\\\\n<p>Well, as it came from an HTML page, you could <em>read</em> it to extract the HTML and, in fact, such a <code>http.client.HTTPResponse</code> object has an associated <code>read()</code> method. In this exercise, you'll build on your previous great work to extract the response and print the HTML.</p>","^D","Printing HTTP request results in Python using urllib","^1T","# Import packages\\\\nfrom urllib.request import urlopen, Request\\\\n\\\\n# Specify the url\\\\nurl = \\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"\\\\n\\\\n# This packages the request\\\\nrequest = Request(url)\\\\n\\\\n# Sends the request and catches the response: response\\\\n\\\\n\\\\n# Extract the response: html\\\\n\\\\n\\\\n# Print the html\\\\n\\\\n\\\\n# Be polite and close the response!\\\\nresponse.close()","^1U","<ul>\\\\n<li>Send the request and catch the response in the variable <code>response</code> with the function <code>urlopen()</code>, as in the previous exercise.</li>\\\\n<li>Extract the response using the <code>read()</code> method and store the result in the variable <code>html</code>.</li>\\\\n<li>Print the string <code>html</code>.</li>\\\\n<li>Hit submit to perform all of the above and to close the response: be tidy!</li>\\\\n</ul>","^1K",7,"sct","\\\\n# Test: Predefined code\\\\npredef_msg = \\\\"You don't have to change any of the predefined code.\\\\"\\\\nEx().has_import(\\\\n \\\\"urllib.request.urlopen\\\\",\\\\n not_imported_msg=predef_msg\\\\n)\\\\n\\\\nEx().has_import(\\\\n \\\\"urllib.request.Request\\\\",\\\\n not_imported_msg=predef_msg\\\\n)\\\\n\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\n\\\\n# Test: call to Request() and 'request' variable\\\\nEx().check_function(\\\\"urllib.request.Request\\\\").check_args(0).has_equal_value()\\\\nEx().check_object(\\\\"request\\\\")\\\\n\\\\n# Test: call to urlopen() and 'response' variable\\\\nEx().check_function(\\\\"urllib.request.urlopen\\\\").check_args(0).has_equal_ast()\\\\nEx().check_object(\\\\"response\\\\")\\\\n\\\\n# Test: call to urlopen() and 'response' variable\\\\nEx().check_function(\\\\"response.read\\\\")\\\\nEx().check_object(\\\\"html\\\\")\\\\n\\\\n# Test: call to print()\\\\nEx().check_function('print').check_args(0).has_equal_ast()\\\\n\\\\n# Test: Predefined code\\\\nEx().check_function(\\\\"response.close\\\\")\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import packages\\\\nfrom urllib.request import urlopen, Request\\\\n\\\\n# Specify the url\\\\nurl = \\\\"https://campus.datacamp.com/courses/1606/4135?ex=2\\\\"\\\\n\\\\n# This packages the request\\\\nrequest = Request(url)\\\\n\\\\n# Sends the request and catches the response: response\\\\nresponse = urlopen(request)\\\\n\\\\n# Extract the response: html\\\\nhtml = response.read()\\\\n\\\\n# Print the html\\\\nprint(html)\\\\n\\\\n# Be polite and close the response!\\\\nresponse.close()","^1X","<ul>\\\\n<li>Pass <code>request</code> as an argument to <code>urlopen()</code>.</li>\\\\n<li>Apply the method <code>read()</code> to the response object <code>response</code>.</li>\\\\n<li>Simply pass <code>html</code> to the <code>print()</code> function.</li>\\\\n<li>You don't have to modify the code for closing the response.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.4368582772187055,"^2:",42712],["^ 
","id",42713,"^Q","NormalExercise","^1S","<p>Now that you've got your head and hands around making HTTP requests using the urllib package, you're going to figure out how to do the same using the higher-level requests library. You'll once again be pinging DataCamp servers for their <code>\\\\"http://www.datacamp.com/teach/documentation\\\\"</code> page.</p>\\\\n<p>Note that unlike in the previous exercises using urllib, you don't have to close the connection when using requests!</p>","^D","Performing HTTP requests in Python using requests","^1T","# Import package\\\\n\\\\n\\\\n# Specify the url: url\\\\n\\\\n\\\\n# Packages the request, send the request and catch the response: r\\\\n\\\\n\\\\n# Extract the response: text\\\\n\\\\n\\\\n# Print the html\\\\nprint(text)","^1U","<ul>\\\\n<li>Import the package <code>requests</code>.</li>\\\\n<li>Assign the URL of interest to the variable <code>url</code>.</li>\\\\n<li>Package the request to the URL, send the request and catch the response with a single function <code>requests.get()</code>, assigning the response to the variable <code>r</code>.</li>\\\\n<li>Use the <code>text</code> attribute of the object <code>r</code> to return the HTML of the webpage as a string; store the result in a variable <code>text</code>.</li>\\\\n<li>Hit submit to print the HTML of the webpage.</li>\\\\n</ul>","^1K",8,"sct","\\\\n# Test: import requests\\\\nEx().has_import(\\\\"requests\\\\")\\\\n\\\\n# Test: 'url' variable\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\n\\\\n# Test: call to requests.get() and 'r' variable\\\\nEx().check_function(\\\\"requests.get\\\\").check_args(0).has_equal_value()\\\\nEx().check_object(\\\\"r\\\\")\\\\n\\\\n# Test: 'text' variable\\\\nEx().has_code(\\\\"r.text\\\\", pattern = False, not_typed_msg=\\\\"Have you used `r.text` to create `text`?\\\\")\\\\nEx().check_object(\\\\"text\\\\")\\\\n\\\\n# Test: Predefined code\\\\nEx().check_function('print').check_args(0).has_equal_ast()\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import package\\\\nimport requests\\\\n\\\\n# Specify the url: url\\\\nurl = \\\\"http://www.datacamp.com/teach/documentation\\\\"\\\\n\\\\n# Packages the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extract the response: text\\\\ntext = r.text\\\\n\\\\n# Print the html\\\\nprint(text)","^1X","<ul>\\\\n<li>To import a package <code>x</code>, execute <code>import x</code>.</li>\\\\n<li>Did you type in the URL correctly?</li>\\\\n<li>Pass the <em>url</em> (the <code>url</code> object you defined) as an argument to <code>requests.get()</code>.</li>\\\\n<li>You can access the <code>text</code> attribute of the object <code>r</code> by executing <code>r.text</code>.</li>\\\\n<li>You don't have to modify the code for printing the HTML of the webpage.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.3464698092388834,"^2:",42713],["^ ","id",990670,"^Q","VideoExercise","^1S",null,"^D","Scraping the web in Python","^1T","","^1U",null,"^1K",9,"sct","","^1V","","^1W","","^1X",null,"^1Y",null,"xp",50,"^1Z",[],"^1[",[],"^20","","^21",null,"^22",null,"^23",56.25,"^24","course_1606_9d1f8a331d1200c7e1bdbfcaf3a7a491","key","da43858012","^25","python","^26",1606,"^27",4135,"^13",null,"^28","v0","^29",0.8666582036246655,"^2:",990670],["^ ","id",42715,"^Q","NormalExercise","^1S","<p>In this interactive exercise, you'll learn how to use the BeautifulSoup package to <em>parse</em>, <em>prettify</em> and <em>extract</em> 
information from HTML. You'll scrape the data from the webpage of Guido van Rossum, Python's very own <a href=\\\\"https://en.wikipedia.org/wiki/Benevolent_dictator_for_life\\\\">Benevolent Dictator for Life</a>. In the following exercises, you'll prettify the HTML and then extract the text and the hyperlinks.</p>\\\\n<p>The URL of interest is <code>url = 'https://www.python.org/~guido/'</code>.</p>","^D","Parsing HTML with BeautifulSoup","^1T","# Import packages\\\\nimport requests\\\\nfrom ____ import ____\\\\n\\\\n# Specify url: url\\\\n\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\n\\\\n\\\\n# Extracts the response as html: html_doc\\\\n\\\\n\\\\n# Create a BeautifulSoup object from the HTML: soup\\\\n\\\\n\\\\n# Prettify the BeautifulSoup object: pretty_soup\\\\n\\\\n\\\\n# Print the response\\\\nprint(pretty_soup)","^1U","<ul>\\\\n<li>Import the function <code>BeautifulSoup</code> from the package <code>bs4</code>.</li>\\\\n<li>Assign the URL of interest to the variable <code>url</code>.</li>\\\\n<li>Package the request to the URL, send the request and catch the response with a single function <code>requests.get()</code>, assigning the response to the variable <code>r</code>.</li>\\\\n<li>Use the <code>text</code> attribute of the object <code>r</code> to return the HTML of the webpage as a string; store the result in a variable <code>html_doc</code>.</li>\\\\n<li>Create a BeautifulSoup object <code>soup</code> from the resulting HTML using the function <code>BeautifulSoup()</code>.</li>\\\\n<li>Use the method <code>prettify()</code> on <code>soup</code> and assign the result to <code>pretty_soup</code>.</li>\\\\n<li>Hit submit to print to prettified HTML to your shell!</li>\\\\n</ul>","^1K",10,"sct","# Test: Predefined code\\\\npredef_msg = \\\\"You don't have to change any of the predefined code.\\\\"\\\\nEx().has_import(\\\\n \\\\"requests\\\\",\\\\n not_imported_msg=predef_msg\\\\n)\\\\n\\\\n# Test: import BeautifulSoup\\\\nimport_msg = \\\\"Did you correctly import the required packages?\\\\"\\\\nEx().has_import(\\\\n \\\\"bs4.BeautifulSoup\\\\",\\\\n not_imported_msg=import_msg\\\\n)\\\\n\\\\n# Test: 'url' variable\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\n\\\\n# Test: call to requests.get() and 'r' variable\\\\nEx().check_function(\\\\"requests.get\\\\").check_args(0).has_equal_value()\\\\nEx().check_object(\\\\"r\\\\")\\\\n\\\\n\\\\n# Test: 'html_doc' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"html_doc\\\\").has_equal_value(),\\\\n has_code(\\\\"r.text\\\\", pattern = False, not_typed_msg=\\\\"Have you used `r.text` to create `html_doc`?\\\\")\\\\n)\\\\n\\\\n# Test: call to BeautifulSoup() and 'soup' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"soup\\\\").has_equal_value(),\\\\n check_function(\\\\"bs4.BeautifulSoup\\\\").check_args(0).has_equal_value()\\\\n )\\\\n\\\\n# Test: call to prettify() and 'pretty_soup' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"pretty_soup\\\\").has_equal_value(),\\\\n check_function(\\\\"soup.prettify\\\\")\\\\n )\\\\n\\\\n# Test: Predefined code\\\\nEx().has_printout(0)\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import packages\\\\nimport requests\\\\nfrom bs4 import BeautifulSoup\\\\n\\\\n# Specify url: url\\\\nurl = 'https://www.python.org/~guido/'\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extracts the response as html: html_doc\\\\nhtml_doc = r.text\\\\n\\\\n# Create 
a BeautifulSoup object from the HTML: soup\\\\nsoup = BeautifulSoup(html_doc)\\\\n\\\\n# Prettify the BeautifulSoup object: pretty_soup\\\\npretty_soup = soup.prettify()\\\\n\\\\n# Print the response\\\\nprint(pretty_soup)","^1X","<ul>\\\\n<li>To import a function <code>y</code> from a package <code>x</code>, execute <code>from x import y</code>.</li>\\\\n<li>Check the URL to make sure that you typed it in correctly.</li>\\\\n<li>Pass the <em>url</em> (the <code>url</code> object you defined) as an argument to <code>requests.get()</code>.</li>\\\\n<li>You can access the <code>text</code> attribute of the object <code>r</code> by executing <code>r.text</code>.</li>\\\\n<li>Pass the extracted <em>HTML</em> as an argument to <code>BeautifulSoup()</code>.</li>\\\\n<li>To use the <code>prettify()</code> method on the BeautifulSoup object <code>soup</code>, execute <code>soup.prettify()</code>.</li>\\\\n<li>You don't have to modify the code to print the prettified HTML.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.2142961690812859,"^2:",42715],["^ ","id",42716,"^Q","NormalExercise","^1S","<p>As promised, in the following exercises, you'll learn the basics of extracting information from HTML soup. In this exercise, you'll figure out how to extract the text from the BDFL's webpage, along with printing the webpage's title.</p>","^D","Turning a webpage into data using BeautifulSoup: getting the text","^1T","# Import packages\\\\nimport requests\\\\nfrom bs4 import BeautifulSoup\\\\n\\\\n# Specify url: url\\\\nurl = 'https://www.python.org/~guido/'\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extract the response as html: html_doc\\\\nhtml_doc = r.text\\\\n\\\\n# Create a BeautifulSoup object from the HTML: soup\\\\n\\\\n\\\\n# Get the title of Guido's webpage: guido_title\\\\n\\\\n\\\\n# Print the title of Guido's webpage to the shell\\\\n\\\\n\\\\n# Get Guido's text: guido_text\\\\n\\\\n\\\\n# Print Guido's text to the shell\\\\nprint(guido_text)","^1U","<ul>\\\\n<li>In the sample code, the HTML response object <code>html_doc</code> has already been created: your first task is to Soupify it using the function <code>BeautifulSoup()</code> and to assign the resulting soup to the variable <code>soup</code>.</li>\\\\n<li>Extract the title from the HTML soup <code>soup</code> using the attribute <code>title</code> and assign the result to <code>guido_title</code>.</li>\\\\n<li>Print the title of Guido's webpage to the shell using the <code>print()</code> function.</li>\\\\n<li>Extract the text from the HTML soup <code>soup</code> using the method <code>get_text()</code> and assign to <code>guido_text</code>.</li>\\\\n<li>Hit submit to print the text from Guido's webpage to the shell.</li>\\\\n</ul>","^1K",11,"sct","# Test: Predefined code\\\\npredef_msg = \\\\"You don't have to change any of the predefined code.\\\\"\\\\nEx().has_import(\\\\n \\\\"requests\\\\",\\\\n not_imported_msg=predef_msg\\\\n)\\\\n\\\\n# Test: import BeautifulSoup\\\\nEx().has_import(\\\\n \\\\"bs4.BeautifulSoup\\\\",\\\\n not_imported_msg=predef_msg\\\\n)\\\\n\\\\n# Test: 'url' variable\\\\nEx().check_object(\\\\"url\\\\").has_equal_value()\\\\n\\\\n# Test: call to requests.get() and 'r' variable\\\\nEx().check_function(\\\\"requests.get\\\\").check_args(0).has_equal_value()\\\\nEx().check_object(\\\\"r\\\\")\\\\n\\\\n\\\\n# Test: 'html_doc' variable\\\\nEx().check_correct(\\\\n 
check_object(\\\\"html_doc\\\\").has_equal_value(),\\\\n has_code(\\\\"r.text\\\\", pattern = False, not_typed_msg=\\\\"Have you used `r.text` to create `html_doc`?\\\\")\\\\n)\\\\n\\\\n# Test: call to BeautifulSoup() and 'soup' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"soup\\\\").has_equal_value(),\\\\n check_function(\\\\"bs4.BeautifulSoup\\\\").check_args(0).has_equal_value()\\\\n )\\\\n\\\\n# Test: 'guido_title' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"guido_title\\\\").has_equal_value(),\\\\n has_code(\\\\"soup.title\\\\", pattern = False, not_typed_msg=\\\\"Have you used `soup.title` to create `guido_title`?\\\\")\\\\n)\\\\n\\\\n# Test: call to print()\\\\nEx().has_printout(0)\\\\n\\\\n# Test: call to soup.get_text() and 'guido_text' variable\\\\nEx().check_correct(\\\\n check_object(\\\\"guido_text\\\\").has_equal_value(),\\\\n check_function(\\\\"soup.get_text\\\\")\\\\n )\\\\n\\\\n# Test: Predefined code\\\\nEx().has_printout(1)\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")\\\\n","^1V","","^1W","# Import packages\\\\nimport requests\\\\nfrom bs4 import BeautifulSoup\\\\n\\\\n# Specify url: url\\\\nurl = 'https://www.python.org/~guido/'\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extract the response as html: html_doc\\\\nhtml_doc = r.text\\\\n\\\\n# Create a BeautifulSoup object from the HTML: soup\\\\nsoup = BeautifulSoup(html_doc)\\\\n\\\\n# Get the title of Guido's webpage: guido_title\\\\nguido_title = soup.title\\\\n\\\\n# Print the title of Guido's webpage to the shell\\\\nprint(guido_title)\\\\n\\\\n# Get Guido's text: guido_text\\\\nguido_text = soup.get_text()\\\\n\\\\n# Print Guido's text to the shell\\\\nprint(guido_text)","^1X","<ul>\\\\n<li>Pass the <em>HTML response object</em> as an argument to <code>BeautifulSoup()</code>.</li>\\\\n<li>You can access the <code>title</code> attribute of the object <code>soup</code> by executing <code>soup.title</code>.</li>\\\\n<li>The object that contains the title of Guido's webpage is <code>guido_title</code>; pass this as an argument to <code>print()</code>.</li>\\\\n<li>Use the method <code>get_text()</code> on the HTML soup <code>soup</code> by executing <code>soup.get_text()</code>.</li>\\\\n<li>You don't have to modify the code to print the text from Guido's webpage.</li>\\\\n</ul>","^1Y",null,"xp",100,"^1Z",[],"^1[",[],"^20","","^25","python","^29",0.4857854755758062,"^2:",42716],["^ ","id",42717,"^Q","NormalExercise","^1S","<p>In this exercise, you'll figure out how to extract the URLs of the hyperlinks from the BDFL's webpage. 
In the process, you'll become close friends with the soup method <code>find_all()</code>.</p>","^D","Turning a webpage into data using BeautifulSoup: getting the hyperlinks","^1T","# Import packages\\\\nimport requests\\\\nfrom bs4 import BeautifulSoup\\\\n\\\\n# Specify url\\\\nurl = 'https://www.python.org/~guido/'\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extracts the response as html: html_doc\\\\nhtml_doc = r.text\\\\n\\\\n# create a BeautifulSoup object from the HTML: soup\\\\nsoup = BeautifulSoup(html_doc)\\\\n\\\\n# Print the title of Guido's webpage\\\\nprint(soup.title)\\\\n\\\\n# Find all 'a' tags (which define hyperlinks): a_tags\\\\n\\\\n\\\\n# Print the URLs to the shell\\\\nfor ____ in ____:\\\\n ____","^1U","<ul>\\\\n<li>Use the method <code>find_all()</code> to find all hyperlinks in <code>soup</code>, remembering that hyperlinks are defined by the HTML tag <code>&lt;a&gt;</code> but passed to <code>find_all()</code> without angle brackets; store the result in the variable <code>a_tags</code>.</li>\\\\n<li>The variable <code>a_tags</code> is a results set: your job now is to enumerate over it, using a <code>for</code> loop and to print the actual URLs of the hyperlinks; to do this, for every element <code>link</code> in <code>a_tags</code>, you want to <code>print()</code> <code>link.get('href')</code>.</li>\\\\n</ul>","^1K",12,"sct","predef_msg = \\\\"You don't have to change any of the predefined code.\\\\"\\\\nEx().has_import(\\\\"requests\\\\")\\\\nEx().has_import(\\\\"bs4.BeautifulSoup\\\\")\\\\nEx().check_object(\\\\"url\\\\").has_equal_value(incorrect_msg = predef_msg)\\\\nEx().check_function(\\\\"requests.get\\\\").check_args(0).has_equal_ast()\\\\nEx().check_object(\\\\"html_doc\\\\").has_equal_value(incorrect_msg = predef_msg)\\\\nEx().check_object(\\\\"soup\\\\").has_equal_value(incorrect_msg = predef_msg)\\\\nEx().has_printout(0)\\\\n\\\\nEx().check_correct(\\\\n check_object(\\\\"a_tags\\\\"),\\\\n check_function(\\\\"soup.find_all\\\\").check_args(0).has_equal_value()\\\\n)\\\\nEx().check_for_loop().multi(\\\\n check_iter().has_equal_value(incorrect_msg = \\\\"You have to iterate over `a_tags`\\\\"),\\\\n check_body().set_context('<a href=\\\\"pics.html\\\\"><img border=\\\\"0\\\\" src=\\\\"images/IMG_2192.jpg\\\\"/></a>').check_function(\\\\"print\\\\").check_args(0).check_function(\\\\"link.get\\\\").check_args(0).has_equal_value()\\\\n )\\\\n\\\\nsuccess_msg(\\\\"Awesome!\\\\")","^1V","","^1W","# Import packages\\\\nimport requests\\\\nfrom bs4 import BeautifulSoup\\\\n\\\\n# Specify url\\\\nurl = 'https://www.python.org/~guido/'\\\\n\\\\n# Package the request, send the request and catch the response: r\\\\nr = requests.get(url)\\\\n\\\\n# Extracts the response as html: html_doc\\\\nhtml_doc = r.text\\\\n\\\\n# create a BeautifulSoup object from the HTML: soup\\\\nsoup = BeautifulSoup(html_doc)\\\\n\\\\n# Print the title of Guido's webpage\\\\nprint(soup.title)\\\\n\\\\n# Find all 'a' tags (which define hyperlinks): a_tags\\\\na_tags = soup.find_all('a')\\\\n\\\\n# Print the URLs to the shell\\\\nfor link in a_tags:\\\\n print(link.get('href'))","^1X","<ul>\\\\n<li>Pass the <em>HTML tag</em> to find (without the angle brackets <code>&lt;&gt;</code>) as a string argument to <code>find_all()</code>.</li>\\\\n<li>Recall that the <code>for</code> loop recipe is: <code>for</code> <em>loop variable</em> <code>in</code> <em>results set</em><code>:</code>. 
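A minimal rendering of that reference solution, taken from the embedded exercise data (the URL, variable names, and the bare `BeautifulSoup(html_doc)` call without an explicit parser are as given there):

```python
# Import packages
import requests
from bs4 import BeautifulSoup

# Specify url
url = 'https://www.python.org/~guido/'

# Package the request, send the request and catch the response: r
r = requests.get(url)

# Extract the response as html: html_doc
html_doc = r.text

# Create a BeautifulSoup object from the HTML: soup
soup = BeautifulSoup(html_doc)

# Print the title of Guido's webpage
print(soup.title)

# Find all 'a' tags (which define hyperlinks): a_tags
a_tags = soup.find_all('a')

# Print the URLs to the shell
for link in a_tags:
    print(link.get('href'))
```

Passing an explicit parser, e.g. `BeautifulSoup(html_doc, 'html.parser')`, avoids the "no parser was explicitly specified" warning without changing the result.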
[Page header, course-outline navigation, and styling markup omitted.]

The exercise rendered on the captured page is the chapter's second one, "Importing flat files from the web: your turn!" (100 XP):

You are about to import your first file from the web! The flat file you will import will be `'winequality-red.csv'` from the University of California, Irvine's [Machine Learning repository](http://archive.ics.uci.edu/ml/index.html). The flat file contains tabular data of physiochemical properties of red wine, such as pH, alcohol content and citric acid content, along with wine quality rating.

The URL of the file is `'https://s3.amazonaws.com/assets.datacamp.com/production/course_1606/datasets/winequality-red.csv'`.

After you import it, you'll check your working directory to confirm that it is there and then you'll load it into a `pandas` DataFrame.

Instructions:

- Import the function `urlretrieve` from the subpackage `urllib.request`.
- Assign the URL of the file to the variable `url`.
- Use the function `urlretrieve()` to save the file locally as `'winequality-red.csv'`.
- Execute the remaining code to load `'winequality-red.csv'` in a pandas DataFrame and to print its head to the shell.

The reference solution recovered from the page data follows this list.
1ms4xsv">.css-1ms4xsv{-webkit-align-items:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;border:none;border-radius:4px;border-style:solid;border-width:2px;cursor:pointer;display:-webkit-inline-box;display:-webkit-inline-flex;display:-ms-inline-flexbox;display:inline-flex;-webkit-box-pack:center;-ms-flex-pack:center;-webkit-justify-content:center;justify-content:center;position:relative;-webkit-text-decoration:none;text-decoration:none;text-transform:capitalize;-webkit-transition:0.15s;transition:0.15s;vertical-align:baseline;white-space:nowrap;background-color:transparent;border-color:#05192d;color:#05192d;padding:0 15px;}.css-1ms4xsv:active{-webkit-transform:perspective(1px) scale(0.975);-moz-transform:perspective(1px) scale(0.975);-ms-transform:perspective(1px) scale(0.975);transform:perspective(1px) scale(0.975);}.css-1ms4xsv:disabled,.css-1ms4xsv:hover:disabled,.css-1ms4xsv:active:disabled{-webkit-transform:none;-moz-transform:none;-ms-transform:none;transform:none;}.css-1ms4xsv:focus{outline:0;}.css-1ms4xsv:hover{background-color:rgba(5, 25, 45, 0.15);border-color:#05192d;color:#05192d;}</style><button class="css-1ms4xsv" type="button" data-cy="exercise-show-hint"><svg viewbox="0 0 18 18" aria-hidden="true" height="18" role="img" width="18"><path fill="currentColor" d="M9 0a7 7 0 014.95 11.95l-.001-.001c-.794.795-.949 1.1-.949 2.051a1 1 0 01-2 0c0-1.548.396-2.325 1.535-3.467l.04-.037a5 5 0 10-7.11.037C6.605 11.675 7 12.453 7 14a1 1 0 01-2 0c0-.951-.155-1.256-.949-2.051A7 7 0 019 0zm0 7a1 1 0 011 1v6a1 1 0 01-2 0V8a1 1 0 011-1zm0 11c-1.657 0-3-.895-3-2h6c0 1.105-1.343 2-3 2z" fill-rule="evenodd"/></svg><style data-emotion="css umljpx">.css-umljpx{font-size:16px;line-height:32px;color:#05192d;font-weight:bold;margin-left:8px;}</style><style data-emotion="css x8hx3d">.css-x8hx3d{-webkit-font-smoothing:antialiased;color:rgb(5, 25, 45);font-family:Studio-Feixen-Sans,Arial;font-style:normal;font-size:16px;font-weight:400;font-size:16px;line-height:32px;color:#05192d;font-weight:bold;margin-left:8px;}</style><span class="css-x8hx3d">Take Hint (-30 XP)</span></button></nav></section></div></div></div></div></div></div></div></div></div></aside><section class="exercise--content" style="width:60%"><div class="exercise-waiting"><div class="global-spinner dc-u-fx-jcc dc-u-fx"><style data-emotion="css 1eqmq6c">.css-1eqmq6c{width:70px;}</style><div class="css-1eqmq6c"></div></div><noscript></noscript></div></section></div><style data-emotion="css dhfy3a 7y8jxc 14w24v3 1yuhvjn">.css-dhfy3a{-webkit-animation-name:animation-2ijyvo;animation-name:animation-2ijyvo;-webkit-animation-timing-function:cubic-bezier(0.23, 1, 0.32, 1);animation-timing-function:cubic-bezier(0.23, 1, 0.32, 1);}@-webkit-keyframes animation-2ijyvo{50%{opacity:1;}from{opacity:0;-webkit-transform:scale3d(0.3, 0.3, 0.3);-moz-transform:scale3d(0.3, 0.3, 0.3);-ms-transform:scale3d(0.3, 0.3, 0.3);transform:scale3d(0.3, 0.3, 0.3);}}@keyframes animation-2ijyvo{50%{opacity:1;}from{opacity:0;-webkit-transform:scale3d(0.3, 0.3, 0.3);-moz-transform:scale3d(0.3, 0.3, 0.3);-ms-transform:scale3d(0.3, 0.3, 0.3);transform:scale3d(0.3, 0.3, 0.3);}}.css-7y8jxc{-webkit-animation-name:animation-1phn0oq;animation-name:animation-1phn0oq;-webkit-animation-timing-function:cubic-bezier(0.755, 0.05, 0.855, 0.06);animation-timing-function:cubic-bezier(0.755, 0.05, 0.855, 0.06);}@-webkit-keyframes animation-1phn0oq{50%{opacity:0;-webkit-transform:scale3d(0.3, 0.3, 0.3);-moz-transform:scale3d(0.3, 0.3, 0.3);-ms-transform:scale3d(0.3, 
0.3, 0.3);transform:scale3d(0.3, 0.3, 0.3);}from{opacity:1;}to{opacity:0;}}@keyframes animation-1phn0oq{50%{opacity:0;-webkit-transform:scale3d(0.3, 0.3, 0.3);-moz-transform:scale3d(0.3, 0.3, 0.3);-ms-transform:scale3d(0.3, 0.3, 0.3);transform:scale3d(0.3, 0.3, 0.3);}from{opacity:1;}to{opacity:0;}}.css-14w24v3{left:50%;position:fixed;top:0;-webkit-transform:translateX(-50%);-moz-transform:translateX(-50%);-ms-transform:translateX(-50%);transform:translateX(-50%);z-index:999;}.css-14w24v3 .Toastify__progress-bar{-webkit-animation:animation-qqoh2i linear 1;animation:animation-qqoh2i linear 1;}@-webkit-keyframes animation-qqoh2i{0%{-webkit-transform:scaleX(1);-moz-transform:scaleX(1);-ms-transform:scaleX(1);transform:scaleX(1);}100%{-webkit-transform:scaleX(0);-moz-transform:scaleX(0);-ms-transform:scaleX(0);transform:scaleX(0);}}@keyframes animation-qqoh2i{0%{-webkit-transform:scaleX(1);-moz-transform:scaleX(1);-ms-transform:scaleX(1);transform:scaleX(1);}100%{-webkit-transform:scaleX(0);-moz-transform:scaleX(0);-ms-transform:scaleX(0);transform:scaleX(0);}}.css-1yuhvjn{margin-top:16px;}</style><div class="Toastify"></div></main><div class="exercise-footer"><ul data-cy="progress-container" class="dc-progress-indicator"><li class="dc-progress-indicator__item"><a href="javascript:void(0)" class="dc-progress-indicator__bar"><div class="dc-progress-indicator__fill" style="width:0%"></div></a></li><li class="dc-progress-indicator__item"><a href="javascript:void(0)" class="dc-progress-indicator__bar"><div class="dc-progress-indicator__fill" style="width:0%"></div></a></li><li class="dc-progress-indicator__item"><a href="javascript:void(0)" class="dc-progress-indicator__bar"><div class="dc-progress-indicator__fill" style="width:0%"></div></a></li></ul></div><style data-emotion="css zs9gal 13qqqtf 728dx5 1d9ftqx atcdtd 728dx5 d3v9zr">.css-zs9gal{opacity:1!important;-webkit-transform:none!important;-moz-transform:none!important;-ms-transform:none!important;transform:none!important;}.css-13qqqtf{-webkit-align-items:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-flex-direction:column;-ms-flex-direction:column;flex-direction:column;-webkit-box-pack:center;-ms-flex-pack:center;-webkit-justify-content:center;justify-content:center;min-width:0;opacity:0;outline:none;position:relative;-webkit-transform:scale(0.5);-moz-transform:scale(0.5);-ms-transform:scale(0.5);transform:scale(0.5);-webkit-transition:0.4s cubic-bezier(0.19, 1, 0.22, 1);transition:0.4s cubic-bezier(0.19, 1, 0.22, 1);box-sizing:border-box;max-height:100%;padding:8px;width:496px;}.css-728dx5{opacity:0!important;}.css-1d9ftqx{opacity:1!important;}.css-atcdtd{-webkit-align-items:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;background-color:rgba(5, 25, 45, 0.8);bottom:0;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-box-pack:center;-ms-flex-pack:center;-webkit-justify-content:center;justify-content:center;left:0;opacity:0;position:fixed;right:0;top:0;-webkit-transition:opacity 0.6s cubic-bezier(0.19, 1, 0.22, 1);transition:opacity 0.6s cubic-bezier(0.19, 1, 0.22, 1);z-index:900;}.css-d3v9zr{overflow:hidden;}</style></div></div><script>window.MathJax={options:{ignoreHtmlClass:"tex2jax_ignore",processHtmlClass:"tex2jax_process"},tex:{autoload:{color:[],colorV2:["color"]},packages:{"[+]":["noerrors"]}},loader:{load:["[tex]/noerrors"]}}</script><script 
src="/mathjax@3/es5/tex-chtml.js" id="MathJax-script"></script><script src="/static/js/main.1826b417.js"></script><script type="text/javascript">(function(){window[\'__CF$cv$params\']={r:\'63bce5bf3cc309dc\',m:\'b452418c5a18fa911d74a740398aa9f2f7b0ccd1-1617731834-1800-AZmJAkanMrQXZHZ1EUiM1+1B8q5CqNtto0aHuVQac5RQIVtxATwvAh4K/u6KDpFiK6FpPDmyx5spxziICEQ5lF0lkYPeRP+ojojiTz/Owk7nCIQzh1WDaknc5TdS33d/Zch5EnVvqjgxK+73Vumaf80=\',s:[0xe6eaa96af6,0x67ed0a6de7],}})();</script></body></html>'
###Markdown
Performing HTTP requests in Python using requestsNow that you've got your head and hands around making HTTP requests using the urllib package, you're going to figure out how to do the same using the higher-level requests library. You'll once again be pinging DataCamp servers for their `"http://www.datacamp.com/teach/documentation"` page.Note that unlike in the previous exercises using urllib, you don't have to close the connection when using requests!Instructions- Import the package `requests`.- Assign the URL of interest to the variable `url`.- Package the request to the URL, send the request and catch the response with a single function `requests.get()`, assigning the response to the variable `r`.- Use the `text` attribute of the object `r` to return the HTML of the webpage as a string; store the result in a variable `text`.- Print the HTML of the webpage.
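Before looking at the solution cell, here is a small sketch of the "no need to close the connection" point mentioned above: with `urllib.request.urlopen` the response is a file-like object that should be closed (or used as a context manager), while a `requests` response needs no explicit close. The `requests.Session` usage at the end is only an optional illustration of explicit connection reuse, not part of the exercise.
###Code
# Sketch: urllib responses are closed explicitly (or via `with`); requests responses are not
from urllib.request import urlopen
import requests

url = 'https://www.python.org/~guido/'

with urlopen(url) as response:        # urllib: the context manager closes the connection
    html_bytes = response.read()

r = requests.get(url)                 # requests: nothing to close
text = r.text

with requests.Session() as session:   # optional: reuse one connection for several requests
    text = session.get(url).text
###Output
_____no_output_____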
###Code
# Import package
import requests
# Specify the url: url
url = "http://www.datacamp.com/teach/documentation"
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Extract the response: text
text = r.text
# Print the html
print(text)
###Output
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->
<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->
<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en-US"> <!--<![endif]-->
<head>
<title>Attention Required! | Cloudflare</title>
<meta name="captcha-bypass" id="captcha-bypass" />
<meta charset="UTF-8" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=1" />
<meta name="robots" content="noindex, nofollow" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<link rel="stylesheet" id="cf_styles-css" href="/cdn-cgi/styles/cf.errors.css" type="text/css" media="screen,projection" />
<!--[if lt IE 9]><link rel="stylesheet" id='cf_styles-ie-css' href="/cdn-cgi/styles/cf.errors.ie.css" type="text/css" media="screen,projection" /><![endif]-->
<style type="text/css">body{margin:0;padding:0}</style>
<!--[if gte IE 10]><!-->
<script>
if (!navigator.cookieEnabled) {
window.addEventListener('DOMContentLoaded', function () {
var cookieEl = document.getElementById('cookie-alert');
cookieEl.style.display = 'block';
})
}
</script>
<!--<![endif]-->
<script type="text/javascript">
//<![CDATA[
(function(){
window._cf_chl_opt={
cvId: "2",
cType: "interactive",
cNounce: "76769",
cRay: "63bcf46adad1f85b",
cHash: "7289b1cc5170183",
cFPWv: "g",
cTTimeMs: "4000",
cLt: "n",
cRq: {
ru: "aHR0cDovL3d3dy5kYXRhY2FtcC5jb20vdGVhY2gvZG9jdW1lbnRhdGlvbg==",
ra: "cHl0aG9uLXJlcXVlc3RzLzIuMjIuMA==",
rm: "R0VU",
d: "XX3I+KfcVUS7jO8+4ZCfi0beX2DrtoG6ZDPAc3CZ7aJ7luaxz2Zc6laWB8Gc9l7LAisUwZixTqThD+J8/VgHa1lBymdrxijezQuL97L4yI9BQvfedWUwYCW99DY3zWwqPbG6gFEc4+QyPac1q3Ggcyepy+KMjP1cIW5WfzBHRIGJVlliTjbmomAJnenwKnOLycJevjMQzWB2eQs0e5UTAB9HR1LbIjGUoc7Vso3otLUsQshvyU9VyyFy7D7yn/ql74jTD5kqovkNpcLwnUBtKOkCU19vv3v3MTyzgYzle+YZ2wytaBLFoSo9me7YHljX+is3HPrkjczk4tr9d8viER3nJugCj0U2wmM9qMZkfhl/Xjve+atFMV5HnRlkFJTJPAqVkBFiHvyhsxcbXVvCV0DYgBnTh7R1A+ZsmGOOUOd6kwFBzawG/UH66Ol9oxuFi4WyBanev/JJ/kLF0qyw+0MgkOd19KJZFhQesOWgJv9+PThNP7jGCJb74o817wHEJDybn4Ih/8TGLSbjouVfj3eLsA/mLO6689I+qhm98BodZNf7bj0ul+r8DTkpEqaDxDc3l2NA4yY2xRUyCLd6DduXCgBCHErw0PClVOK+SJxx8lcUu4PETX7nbqM7EHU+41iE0C5FevSeBmxrjrIaRNgKHcHQ4HZuqhxYeMc7vuZBSW0ZGtWO8OBnm2xBg/vv+TUXIqxYkIqKTTCpYluMJigrjonJ9h6qTOlkyhk+R259YiCYbwDDqPDohvaMNRmKqkBuyzJCuznNepAS9IS/GmUNL916TbbFsqv9GYP6olD0bapetDHZ4HJMAd1JAeR90zX+cOV/VZsZeEKgBAxANcj3OewTyKvK/fZZWFPLxrIxRtPMeTq6CamEYSC/jzJZLYYONl4UTFVdpUO5J2QauQ==",
t: "MTYxNzczMjQzNS42NzQwMDA=",
m: "Hy8h7rd0hP/vkKdMCw5LqQEWxtV/qKm6jrp53/yGze0=",
i1: "P0y8aEXkoGUrdvlL+Klb6g==",
i2: "/xcwZZzy9M8SzYRAhQrtCA==",
uh: "JnPNhFrP9JDZz++jrWFNK99fEBZafo8DSm+TpH36hUY=",
hh: "rAZnIHiyrNuZ60h9aAZNML8izDilqmOSNuCtac1WqPs=",
}
};
}());
//]]>
</script>
<style type="text/css">
#cf-wrapper #spinner {width:69px; margin: auto;}
#cf-wrapper #cf-please-wait{text-align:center}
.attribution {margin-top: 32px;}
.bubbles { background-color: #f58220; width:20px; height: 20px; margin:2px; border-radius:100%; display:inline-block; }
#cf-wrapper #challenge-form { padding-top:25px; padding-bottom:25px; }
#cf-hcaptcha-container { text-align:center;}
#cf-hcaptcha-container iframe { display: inline-block;}
@keyframes fader { 0% {opacity: 0.2;} 50% {opacity: 1.0;} 100% {opacity: 0.2;} }
#cf-wrapper #cf-bubbles { width:69px; }
@-webkit-keyframes fader { 0% {opacity: 0.2;} 50% {opacity: 1.0;} 100% {opacity: 0.2;} }
#cf-bubbles > .bubbles { animation: fader 1.6s infinite;}
#cf-bubbles > .bubbles:nth-child(2) { animation-delay: .2s;}
#cf-bubbles > .bubbles:nth-child(3) { animation-delay: .4s;}
</style>
</head>
<body>
<div id="cf-wrapper">
<div class="cf-alert cf-alert-error cf-cookie-error" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>
<div id="cf-error-details" class="cf-error-details-wrapper">
<div class="cf-wrapper cf-header cf-error-overview">
<h1 data-translate="challenge_headline">One more step</h1>
<h2 class="cf-subheadline"><span data-translate="complete_sec_check">Please complete the security check to access</span> www.datacamp.com</h2>
</div>
<div class="cf-section cf-highlight cf-captcha-container">
<div class="cf-wrapper">
<div class="cf-columns two">
<div class="cf-column">
<div class="cf-highlight-inverse cf-form-stacked">
<form class="challenge-form interactive-form" id="challenge-form" action="/teach/documentation?__cf_chl_captcha_tk__=5e607b13fd15e7ff6afac0618f7b8453b4d9fc00-1617732435-0-AQuMrR-eu0LgCnc0nzQiNd1V1pqj3_X03byUOnUOga8_y1wWbxjYlT3qOa1IdbhpydTJSwaCGrDPbz93uGk9SJjWj1NiMhVbFIUPxRtBrH0O_8GrCOl-3g30Y-7nrenkjswqPrOsjgfwuavwARwoOix_SovtEJjYDdtDzEH1jpg7BDbI6jSKLm3ZZGn-LV4brWtysfX6QWFkCv4KNdySm4BJohFJXcAyqmeBpuZWAeWxKrqwSVOAeGXrkm-ezDLa7mJio02ZE9yal5MtED_mIymjWVJrEs-Muhe2ziLZ5lJDLzxDwxIK5PZz97-TLbreC7dsfijcO-xSsmwTofdFpzdJGLai36ahKjEfLXWm0o0VIr5Zn7ebrtFg7CogECXMwvSRWA7tTXnA4KszDX_yfQ_ZDXvF4uOk72BCVNoAPg6aepFZkDW6CDi2tdjcdVoT_cwoN-A40RZsE4_2826xKvWjdzC8qrUsJTLpN-UYbr0_hvRpfDBJb8KderJUmujG2n1ZUu-RUvzKy448ZF1t_ybUcCTyRj8nS2duVQHtzCEo9nPfgRW6bKrtb9X929rkQnWOE6LG8M8SeGk4VuSvofQ3GpCn6Vpl-Y5o6zLVd-ZPpUh6WTBBksrVPHzgseR8Pzqk_XyXsigS3Q_qEbmcImZnOH8zhnQlJh3y_s4L-rConr8mkPRQuoz4QP2VQ_GUvQ" method="POST" enctype="application/x-www-form-urlencoded">
<div id='cf-please-wait'>
<div id='spinner'>
<div id="cf-bubbles">
<div class="bubbles"></div>
<div class="bubbles"></div>
<div class="bubbles"></div>
</div>
</div>
<p data-translate="please_wait" id="cf-spinner-please-wait">Please stand by, while we are checking your browser...</p>
<p data-translate="redirecting" id="cf-spinner-redirecting" style="display:none">Redirecting...</p>
</div>
<input type="hidden" name="r" value="5a894e825ac29fc261840b9acb389e3d689cb42a-1617732435-0-AaMwjRia0xfPpodn12dDdURW8YLDftbTyWs0JzG4MiJ8ajowP9/FxlmI02p/XQ3OIgPpHmwklnB6vreWAb+096mdTx3heHnnU+GufGr8p6va+KmHI+iktLFPeO3qn4jQowXWB8ho4ESZx5VsL6Rmglc+xi5xzuSSSozde4hvv+4ILbL/hsbumBg+i95/2dZYtYGVmxF5dxyuJqI1bI1/LS1ibvgowx8mdlgbPnMVyizSk2z0+Gtb+nurI7dcud1aB9EqEome2z+LtOMqTxKggE4VaQIlMTp5RSOXc8wpwGf1j64AsN+NrdgMTlYcyTuvo2hiapul2pcNm4c/3ulhtDYfRNOvj3s0E3/lD+aCh7macXAIFFtsFsqZmShS1KkiJE5vcVbQ71mrGQNwgHxhIAmx7GyKxpaFHJHIcrrW06HrUa4Snm+hiiKfwjC7NQdb7/EZEPtVIogbyJM1w/BXa3ErRdHPZnjPAheaFIrAt/xkg9L8iTDGrHh1aqK7yghSeuHJ20JLd5UZYRRHU+VGknfC8GQauHYijbrCuMDPLqI+q2VTmL7RXpPqQdazKl8AhA==">
<input type="hidden" name="cf_captcha_kind" value="h">
<input type="hidden" name="vc" value="ed15498576f1947ba32eee53833faef4">
<noscript id="cf-captcha-bookmark" class="cf-captcha-info">
<h1 data-translate="turn_on_js" style="color:#bd2426;">Please turn JavaScript on and reload the page.</h1>
</noscript>
<div id="no-cookie-warning" class="cookie-warning" data-translate="turn_on_cookies" style="display:none">
<p data-translate="turn_on_cookies" style="color:#bd2426;">Please enable Cookies and reload the page.</p>
</div>
<script type="text/javascript">
//<![CDATA[
var a = function() {try{return !!window.addEventListener} catch(e) {return !1} },
b = function(b, c) {a() ? document.addEventListener("DOMContentLoaded", b, c) : document.attachEvent("onreadystatechange", b)};
b(function(){
var cookiesEnabled=(navigator.cookieEnabled)? true : false;
if(!cookiesEnabled){
var q = document.getElementById('no-cookie-warning');q.style.display = 'block';
}
});
//]]>
</script>
<div id="trk_captcha_js" style="background-image:url('/cdn-cgi/images/trace/captcha/nojs/h/transparent.gif?ray=63bcf46adad1f85b')"></div>
</form>
<script type="text/javascript">
//<![CDATA[
(function(){
var isIE = /(MSIE|Trident\/|Edge\/)/i.test(window.navigator.userAgent);
var trkjs = isIE ? new Image() : document.createElement('img');
trkjs.setAttribute("src", "/cdn-cgi/images/trace/captcha/js/transparent.gif?ray=63bcf46adad1f85b");
trkjs.id = "trk_captcha_js";
trkjs.setAttribute("alt", "");
document.body.appendChild(trkjs);
var cpo=document.createElement('script');
cpo.type='text/javascript';
cpo.src="/cdn-cgi/challenge-platform/h/g/orchestrate/captcha/v1?ray=63bcf46adad1f85b";
document.getElementsByTagName('head')[0].appendChild(cpo);
}());
//]]>
</script>
</div>
</div>
<div class="cf-column">
<div class="cf-screenshot-container">
<span class="cf-no-screenshot"></span>
</div>
</div>
</div>
</div>
</div>
<div class="cf-section cf-wrapper">
<div class="cf-columns two">
<div class="cf-column">
<h2 data-translate="why_captcha_headline">Why do I have to complete a CAPTCHA?</h2>
<p data-translate="why_captcha_detail">Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.</p>
</div>
<div class="cf-column">
<h2 data-translate="resolve_captcha_headline">What can I do to prevent this in the future?</h2>
<p data-translate="resolve_captcha_antivirus">If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware.</p>
<p data-translate="resolve_captcha_network">If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking for misconfigured or infected devices.</p>
</div>
</div>
</div>
<div class="cf-error-footer cf-wrapper w-240 lg:w-full py-10 sm:py-4 sm:px-8 mx-auto text-center sm:text-left border-solid border-0 border-t border-gray-300">
<p class="text-13">
<span class="cf-footer-item sm:block sm:mb-1">Cloudflare Ray ID: <strong class="font-semibold">63bcf46adad1f85b</strong></span>
<span class="cf-footer-separator sm:hidden">•</span>
<span class="cf-footer-item sm:block sm:mb-1"><span>Your IP</span>: 177.34.108.76</span>
<span class="cf-footer-separator sm:hidden">•</span>
<span class="cf-footer-item sm:block sm:mb-1"><span>Performance & security by</span> <a rel="noopener noreferrer" href="https://www.cloudflare.com/5xx-error-landing" id="brand_link" target="_blank">Cloudflare</a></span>
</p>
</div><!-- /.error-footer -->
</div>
</div>
<script type="text/javascript">
window._cf_translation = {};
</script>
</body>
</html>
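###Markdown
Note that the output above is not the requested documentation page: Cloudflare intercepted the automated request and returned an "Attention Required!" challenge page instead. A minimal sketch for diagnosing this is below; `status_code` and `headers` are standard attributes of a `requests` response object, and the browser-like `User-Agent` value is purely illustrative — it is not guaranteed to bypass the challenge.
###Code
# Sketch: inspect the response before relying on r.text
import requests

url = "http://www.datacamp.com/teach/documentation"
r = requests.get(url)

print(r.status_code)               # a challenged request typically returns a 4xx/5xx code
print(r.headers.get("Server"))     # often identifies the intermediary (e.g. 'cloudflare')

# Sending a browser-like User-Agent header sometimes changes the outcome (illustrative only)
r2 = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
print(r2.status_code)
###Output
_____no_output_____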
###Markdown
Parsing HTML with BeautifulSoupIn this interactive exercise, you'll learn how to use the BeautifulSoup package to _parse_, _prettify_ and _extract_ information from HTML. You'll scrape the data from the webpage of Guido van Rossum, Python's very own [Benevolent Dictator for Life](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life). In the following exercises, you'll prettify the HTML and then extract the text and the hyperlinks.The URL of interest is `url = 'https://www.python.org/~guido/'`.Instructions- Import the function `BeautifulSoup` from the package `bs4`.- Assign the URL of interest to the variable `url`.- Package the request to the URL, send the request and catch the response with a single function `requests.get()`, assigning the response to the variable `r`.- Use the `text` attribute of the object `r` to return the HTML of the webpage as a string; store the result in a variable `html_doc`.- Create a BeautifulSoup object `soup` from the resulting HTML using the function `BeautifulSoup()`.- Use the method `prettify()` on `soup` and assign the result to `pretty_soup`.- Print the prettified HTML!
###Code
# Import packages
import requests
from bs4 import BeautifulSoup
# Specify url: url
url = 'https://www.python.org/~guido/'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Extracts the response as html: html_doc
html_doc = r.text
# Create a BeautifulSoup object from the HTML: soup
soup = BeautifulSoup(html_doc)
# Prettify the BeautifulSoup object: pretty_soup
pretty_soup = soup.prettify()
# Print the response
print(pretty_soup)
###Output
<html>
<head>
<title>
Guido's Personal Home Page
</title>
</head>
<body bgcolor="#FFFFFF" text="#000000">
<!-- Built from main -->
<h1>
<a href="pics.html">
<img border="0" src="images/IMG_2192.jpg"/>
</a>
Guido van Rossum - Personal Home Page
<a href="pics.html">
<img border="0" height="216" src="images/guido-headshot-2019.jpg" width="270"/>
</a>
</h1>
<p>
<a href="http://www.washingtonpost.com/wp-srv/business/longterm/microsoft/stories/1998/raymond120398.htm">
<i>
"Gawky and proud of it."
</i>
</a>
</p>
<h3>
<a href="images/df20000406.jpg">
Who I Am
</a>
</h3>
<p>
Read
my
<a href="http://neopythonic.blogspot.com/2016/04/kings-day-speech.html">
"King's
Day Speech"
</a>
for some inspiration.
</p>
<p>
I am the author of the
<a href="http://www.python.org">
Python
</a>
programming language. See also my
<a href="Resume.html">
resume
</a>
and my
<a href="Publications.html">
publications list
</a>
, a
<a href="bio.html">
brief bio
</a>
, assorted
<a href="http://legacy.python.org/doc/essays/">
writings
</a>
,
<a href="http://legacy.python.org/doc/essays/ppt/">
presentations
</a>
and
<a href="interviews.html">
interviews
</a>
(all about Python), some
<a href="pics.html">
pictures of me
</a>
,
<a href="http://neopythonic.blogspot.com">
my new blog
</a>
, and
my
<a href="http://www.artima.com/weblogs/index.jsp?blogger=12088">
old
blog
</a>
on Artima.com. I am
<a href="https://twitter.com/gvanrossum">
@gvanrossum
</a>
on Twitter.
</p>
<p>
I am retired, working on personal projects (and maybe a book).
I have worked for Dropbox, Google, Elemental Security, Zope
Corporation, BeOpen.com, CNRI, CWI, and SARA. (See
my
<a href="Resume.html">
resume
</a>
.) I created Python while at CWI.
</p>
<h3>
How to Reach Me
</h3>
<p>
You can send email for me to guido (at) python.org.
I read everything sent there, but I receive too much email to respond
to everything.
</p>
<h3>
My Name
</h3>
<p>
My name often poses difficulties for Americans.
</p>
<p>
<b>
Pronunciation:
</b>
in Dutch, the "G" in Guido is a hard G,
pronounced roughly like the "ch" in Scottish "loch". (Listen to the
<a href="guido.au">
sound clip
</a>
.) However, if you're
American, you may also pronounce it as the Italian "Guido". I'm not
too worried about the associations with mob assassins that some people
have. :-)
</p>
<p>
<b>
Spelling:
</b>
my last name is two words, and I'd like to keep it
that way, the spelling on some of my credit cards notwithstanding.
Dutch spelling rules dictate that when used in combination with my
first name, "van" is not capitalized: "Guido van Rossum". But when my
last name is used alone to refer to me, it is capitalized, for
example: "As usual, Van Rossum was right."
</p>
<p>
<b>
Alphabetization:
</b>
in America, I show up in the alphabet under
"V". But in Europe, I show up under "R". And some of my friends put
me under "G" in their address book...
</p>
<h3>
More Hyperlinks
</h3>
<ul>
<li>
Here's a collection of
<a href="http://legacy.python.org/doc/essays/">
essays
</a>
relating to Python
that I've written, including the foreword I wrote for Mark Lutz' book
"Programming Python".
<p>
</p>
</li>
<li>
I own the official
<a href="images/license.jpg">
<img align="center" border="0" height="75" src="images/license_thumb.jpg" width="100"/>
Python license.
</a>
<p>
</p>
</li>
</ul>
<h3>
The Audio File Formats FAQ
</h3>
<p>
I was the original creator and maintainer of the Audio File Formats
FAQ. It is now maintained by Chris Bagwell
at
<a href="http://www.cnpbagwell.com/audio-faq">
http://www.cnpbagwell.com/audio-faq
</a>
. And here is a link to
<a href="http://sox.sourceforge.net/">
SOX
</a>
, to which I contributed
some early code.
</p>
<hr/>
<a href="images/internetdog.gif">
"On the Internet, nobody knows you're
a dog."
</a>
<hr/>
</body>
</html>
###Markdown
Turning a webpage into data using BeautifulSoup: getting the textAs promised, in the following exercises, you'll learn the basics of extracting information from HTML soup. In this exercise, you'll figure out how to extract the text from the BDFL's webpage, along with printing the webpage's title.Instructions- In the sample code, the HTML response object `html_doc` has already been created: your first task is to Soupify it using the function `BeautifulSoup()` and to assign the resulting soup to the variable `soup`.- Extract the title from the HTML soup `soup` using the attribute `title` and assign the result to `guido_title`.- Print the title of Guido's webpage using the `print()` function.- Extract the text from the HTML soup `soup` using the method `get_text()` and assign to `guido_text`.- Print the text from Guido's webpage.
###Code
# Import packages
import requests
from bs4 import BeautifulSoup
# Specify url: url
url = 'https://www.python.org/~guido/'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Extract the response as html: html_doc
html_doc = r.text
# Create a BeautifulSoup object from the HTML: soup
soup = BeautifulSoup(html_doc)
# Get the title of Guido's webpage: guido_title
guido_title = soup.title
# Print the title of Guido's webpage to the shell
print(guido_title)
# Get Guido's text: guido_text
guido_text = soup.text
# Print Guido's text to the shell
print(guido_text)
###Output
<title>Guido's Personal Home Page</title>
Guido's Personal Home Page
Guido van Rossum - Personal Home Page
"Gawky and proud of it."
Who I Am
Read
my "King's
Day Speech" for some inspiration.
I am the author of the Python
programming language. See also my resume
and my publications list, a brief bio, assorted writings, presentations and interviews (all about Python), some
pictures of me,
my new blog, and
my old
blog on Artima.com. I am
@gvanrossum on Twitter.
I am retired, working on personal projects (and maybe a book).
I have worked for Dropbox, Google, Elemental Security, Zope
Corporation, BeOpen.com, CNRI, CWI, and SARA. (See
my resume.) I created Python while at CWI.
How to Reach Me
You can send email for me to guido (at) python.org.
I read everything sent there, but I receive too much email to respond
to everything.
My Name
My name often poses difficulties for Americans.
Pronunciation: in Dutch, the "G" in Guido is a hard G,
pronounced roughly like the "ch" in Scottish "loch". (Listen to the
sound clip.) However, if you're
American, you may also pronounce it as the Italian "Guido". I'm not
too worried about the associations with mob assassins that some people
have. :-)
Spelling: my last name is two words, and I'd like to keep it
that way, the spelling on some of my credit cards notwithstanding.
Dutch spelling rules dictate that when used in combination with my
first name, "van" is not capitalized: "Guido van Rossum". But when my
last name is used alone to refer to me, it is capitalized, for
example: "As usual, Van Rossum was right."
Alphabetization: in America, I show up in the alphabet under
"V". But in Europe, I show up under "R". And some of my friends put
me under "G" in their address book...
More Hyperlinks
Here's a collection of essays relating to Python
that I've written, including the foreword I wrote for Mark Lutz' book
"Programming Python".
I own the official
Python license.
The Audio File Formats FAQ
I was the original creator and maintainer of the Audio File Formats
FAQ. It is now maintained by Chris Bagwell
at http://www.cnpbagwell.com/audio-faq. And here is a link to
SOX, to which I contributed
some early code.
"On the Internet, nobody knows you're
a dog."
###Markdown
Turning a webpage into data using BeautifulSoup: getting the hyperlinksIn this exercise, you'll figure out how to extract the URLs of the hyperlinks from the BDFL's webpage. In the process, you'll become close friends with the soup method `find_all()`.Instructions- Use the method `find_all()` to find all hyperlinks in `soup`, remembering that hyperlinks are defined by the HTML tag `<a>` but passed to `find_all()` without angle brackets; store the result in the variable `a_tags`.- The variable `a_tags` is a results set: your job now is to enumerate over it, using a `for` loop and to print the actual URLs of the hyperlinks; to do this, for every element `link` in `a_tags`, you want to `print()` `link.get('href')`.
###Code
# Import packages
import requests
from bs4 import BeautifulSoup
# Specify url
url = 'https://www.python.org/~guido/'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Extracts the response as html: html_doc
html_doc = r.text
# Create a BeautifulSoup object from the HTML: soup
soup = BeautifulSoup(html_doc)
# Print the title of Guido's webpage
print(soup.title)
# Find all 'a' tags (which define hyperlinks): a_tags
a_tags = soup.find_all('a')
# Print the URLs to the shell
for link in a_tags:
print(link.get('href'))
###Output
<title>Guido's Personal Home Page</title>
pics.html
pics.html
http://www.washingtonpost.com/wp-srv/business/longterm/microsoft/stories/1998/raymond120398.htm
images/df20000406.jpg
http://neopythonic.blogspot.com/2016/04/kings-day-speech.html
http://www.python.org
Resume.html
Publications.html
bio.html
http://legacy.python.org/doc/essays/
http://legacy.python.org/doc/essays/ppt/
interviews.html
pics.html
http://neopythonic.blogspot.com
http://www.artima.com/weblogs/index.jsp?blogger=12088
https://twitter.com/gvanrossum
Resume.html
guido.au
http://legacy.python.org/doc/essays/
images/license.jpg
http://www.cnpbagwell.com/audio-faq
http://sox.sourceforge.net/
images/internetdog.gif
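###Markdown
The list above mixes absolute URLs with relative ones such as `pics.html` and `Resume.html`. As a small follow-up sketch, the standard-library `urljoin` can resolve the relative links against the page URL; it simply re-uses the `a_tags` result from the previous cell.
###Code
# Sketch: resolve relative hyperlinks against the base URL
from urllib.parse import urljoin

base_url = 'https://www.python.org/~guido/'
for link in a_tags:
    href = link.get('href')
    if href:                          # skip tags without an href attribute
        print(urljoin(base_url, href))
###Output
_____no_output_____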
|
imdb-tensorflow.ipynb | ###Markdown
###Code
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews", with_info = True, as_supervised = True)
import numpy as np
train_data, test_data = imdb['train'], imdb['test']
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []
for s, l in train_data:
training_sentences.append(str(s.numpy()))
training_labels.append(l.numpy())
for s, l in test_data:
testing_sentences.append(str(s.numpy()))
testing_labels.append(l.numpy())
#hyperparameters
vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type = 'post'
oov_tok = '<OOV>'
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words = vocab_size, oov_token = oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(sequences, maxlen = max_length, truncating = trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen = max_length)
#model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length = max_length), #key to text sentiment analysis
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(6, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid')
])
model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 120, 16) 160000
flatten (Flatten) (None, 1920) 0
dense (Dense) (None, 6) 11526
dense_1 (Dense) (None, 1) 7
=================================================================
Total params: 171,533
Trainable params: 171,533
Non-trainable params: 0
_________________________________________________________________
|
scikit-learn/plot_cluster_iris.ipynb | ###Markdown
K-means ClusteringThe plots display firstly what a K-means algorithm would yieldusing three clusters. It is then shown what the effect of a badinitialization is on the classification process:By setting n_init to only 1 (default is 10), the amount oftimes that the algorithm will be run with different centroidseeds is reduced.The next plot displays what using eight clusters would deliverand finally the ground truth.
###Code
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
# Though the following import is not directly being used, it is required
# for 3D projection to work
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = [('k_means_iris_8', KMeans(n_clusters=8)),
('k_means_iris_3', KMeans(n_clusters=3)),
('k_means_iris_bad_init', KMeans(n_clusters=3, n_init=1,
init='random'))]
fignum = 1
titles = ['8 clusters', '3 clusters', '3 clusters, bad initialization']
for name, est in estimators:
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2],
c=labels.astype(np.float), edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(titles[fignum - 1])
ax.dist = 12
fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0),
('Versicolour', 1),
('Virginica', 2)]:
ax.text3D(X[y == label, 3].mean(),
X[y == label, 0].mean(),
X[y == label, 2].mean() + 2, name,
horizontalalignment='center',
bbox=dict(alpha=.2, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(np.float)
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title('Ground Truth')
ax.dist = 12
fig.show()
###Output
_____no_output_____
###Markdown
K-means ClusteringThe plots display firstly what a K-means algorithm would yieldusing three clusters. It is then shown what the effect of a badinitialization is on the classification process:By setting n_init to only 1 (default is 10), the amount oftimes that the algorithm will be run with different centroidseeds is reduced.The next plot displays what using eight clusters would deliverand finally the ground truth.
###Code
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
# Though the following import is not directly being used, it is required
# for 3D projection to work
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = [('k_means_iris_8', KMeans(n_clusters=8)),
('k_means_iris_3', KMeans(n_clusters=3)),
('k_means_iris_bad_init', KMeans(n_clusters=3, n_init=1,
init='random'))]
fignum = 1
titles = ['8 clusters', '3 clusters', '3 clusters, bad initialization']
for name, est in estimators:
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2],
c=labels.astype(np.float), edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(titles[fignum - 1])
ax.dist = 12
fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0),
('Versicolour', 1),
('Virginica', 2)]:
ax.text3D(X[y == label, 3].mean(),
X[y == label, 0].mean(),
X[y == label, 2].mean() + 2, name,
horizontalalignment='center',
bbox=dict(alpha=.2, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(np.float)
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title('Ground Truth')
ax.dist = 12
fig.show()
###Output
_____no_output_____ |
notebooks/ImageLighting&Denoising.ipynb | ###Markdown
Median filtering
###Code
img = cv2.imread("myImage.jpg")
# img2 = cv2.medianBlur(img,5)
# compare = np.concatenate((img,img2),axis=1)
# cv2.imshow('img',compare)
# cv2.waitKey(0)
# cv2.destroyAllWindows
###Output
_____no_output_____
###Markdown
CLAHE contrast improvement.Trying median filtering after CLAHE=> No observable effect on myImage.jpg; need more images to test.
###Code
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# lab = cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
# l,a,b = cv2.split(lab)
# # clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
# # cl = clahe.apply(l)
# # limg = cv2.merge((cl,a,b))
# # final = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
# #Try median filtering to remove noise
# final = cv2.medianBlur(lab,5)
# invert_L = cv2.bitwise_not(final) #invert lightness
# composed = cv2.addWeighted(gray, 0.75, invert_L, 0.25, 0)
def light_removing(img) :
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L = lab[:,:,0]
med_L = cv2.medianBlur(L,5) #median filter
invert_L = cv2.bitwise_not(med_L) #invert lightness
composed = cv2.addWeighted(gray, 0.75, invert_L, 0.25, 0)
return composed
compare = np.concatenate((img, light_removing(img)),axis=1)
cv2.imshow('img',compare)
cv2.waitKey(0)
cv2.destroyWindow('img')
###Output
_____no_output_____ |
prediction/pred_data-driven_lr.ipynb | ###Markdown
IntroductionIn this notebook, we'll evaluate classifiers performing forward and reverse inference using data from neuroimaging articles. *Forward inference* classifiers predict which brain structures were reported in activation coordinate data using the mental functions discussed in article texts. *Reverse inference* classifiers use the same data but flip the inputs and labels, predicting the mental functions in article texts from brain structures in the coordinate data. Classifiers were trained on 12,708 articles, tuned on a validation set of 3,603 articles, and will be evaluated on a held-out test set of 1,816 articles. The classifiers evaluated here are one-vs-rest logistic regression models (`clf = "_lr"` below); at the end of the notebook, their fits are compared against multilayer neural network classifiers implemented in PyTorch, which were trained with ReLU activation functions, 8 layers, and the Adam solver over 500 iterations, with the learning rate, weight decay, and number of units per hidden layer selected based on validation set ROC-AUC.Evaluation metrics include the following:1. **ROC-AUC**, which captures the trade-off between true positive rate (TPR) and false positive rate (FPR).2. **F1 score**, which captures the trade-off between precision and recall (the latter of which is equivalent to the TPR).
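To make the two metrics concrete, here is a small self-contained sketch on toy labels and scores (random data, not the study corpus); it uses only the standard `roc_auc_score` and `f1_score` scorers from scikit-learn, which the notebook's `evaluation` helpers are assumed to apply per label column.
###Code
# Toy illustration of the two evaluation metrics (random data, not the neuroimaging corpus)
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=200)                           # binary labels for one structure/domain
y_prob = np.clip(0.35 * y_true + 0.65 * rng.rand(200), 0, 1)   # noisy predicted probabilities
y_pred = (y_prob > 0.5).astype(int)                            # thresholded at 0.5, as in the notebook

print("ROC-AUC: {:.3f}".format(roc_auc_score(y_true, y_prob)))  # TPR vs. FPR trade-off
print("F1:      {:.3f}".format(f1_score(y_true, y_pred)))       # precision vs. recall trade-off
###Output
_____no_output_____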
###Code
import pandas as pd
import numpy as np
np.random.seed(42)
import sys
sys.path.append("..")
import utilities, evaluation
%matplotlib inline
framework = "data-driven"
suffix = "" # Suffix for term lists
clf = "_lr" # Classification by logistic regression
n_iter = 1000 # Iterations for bootstrap and null distributions
alpha = 0.001 # Significance levels for plotting
dtm_version = 190325 # Version of the document-term matrix
###Output
_____no_output_____
###Markdown
Train the classifiers
###Code
from logistic_regression import prediction
prediction.train_classifier(framework, "forward", clf=clf, dtm_version=dtm_version,
in_path="", out_path="logistic_regression/")
prediction.train_classifier(framework, "reverse", clf=clf, dtm_version=dtm_version,
in_path="", out_path="logistic_regression/")
###Output
_____no_output_____
###Markdown
Load data for evaluation Brain activation coordinates
###Code
act_bin = utilities.load_coordinates()
print("Document N={}, Structure N={}".format(
act_bin.shape[0], act_bin.shape[1]))
###Output
Document N=18155, Structure N=118
###Markdown
Document-term matrix
###Code
dtm_bin = utilities.load_doc_term_matrix(version=190325, binarize=True)
print("Document N={}, Term N={}".format(
dtm_bin.shape[0], dtm_bin.shape[1]))
###Output
Document N=18155, Term N=4107
###Markdown
Framework contents
###Code
lists, circuits = utilities.load_framework(framework, suffix=suffix, clf=clf)
###Output
_____no_output_____
###Markdown
Term list scores
###Code
scores = utilities.score_lists(lists, dtm_bin)
###Output
_____no_output_____
###Markdown
Load classifier fits
###Code
import pickle
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterSampler
directions = ["forward", "reverse"]
fit = {}
for direction in directions:
filename = "logistic_regression/fits/{}_{}.p".format(framework, direction)
fit[direction] = pickle.load(open(filename, 'rb'))
print("-"*50 + "\n{} INFERENCE CLASSIFIER\n".format(direction.upper()) + "-"*50)
print(fit[direction])
print("")
###Output
--------------------------------------------------
FORWARD INFERENCE CLASSIFIER
--------------------------------------------------
OneVsRestClassifier(estimator=LogisticRegression(C=1, class_weight=None,
dual=False, fit_intercept=True,
intercept_scaling=1,
l1_ratio=None, max_iter=1000,
multi_class='warn',
n_jobs=None, penalty='l1',
random_state=42,
solver='liblinear', tol=1e-10,
verbose=0, warm_start=False),
n_jobs=None)
--------------------------------------------------
REVERSE INFERENCE CLASSIFIER
--------------------------------------------------
OneVsRestClassifier(estimator=LogisticRegression(C=0.1, class_weight=None,
dual=False, fit_intercept=True,
intercept_scaling=1,
l1_ratio=None, max_iter=1000,
multi_class='warn',
n_jobs=None, penalty='l1',
random_state=42,
solver='liblinear', tol=1e-10,
verbose=0, warm_start=False),
n_jobs=None)
###Markdown
Load the test set
###Code
test = [int(pmid.strip()) for pmid in open("../data/splits/test.txt")]
m = len(test)
print("Test N={}".format(m))
###Output
Test N=1816
###Markdown
Load the palette
###Code
from style import style
palette = {"forward": [],
"reverse": style.palettes[framework]}
domains = list(circuits.columns)
print(domains)
for structure in act_bin.columns:
dom_idx = np.argmax(circuits.loc[structure].values)
color = palette["reverse"][dom_idx]
palette["forward"].append(color)
###Output
_____no_output_____
###Markdown
Plot ROC and PR curves Forward inference
###Code
d = "forward"
pred_probs = fit[d].predict_proba(scores.loc[test])
labels = act_bin.loc[test].values
###Output
_____no_output_____
###Markdown
ROC curves
###Code
fpr, tpr = evaluation.compute_roc(labels, pred_probs)
evaluation.plot_curves("roc", framework, d, fpr, tpr, palette[d],
opacity=0.4, path="logistic_regression/")
###Output
_____no_output_____
###Markdown
PR curves
###Code
precision, recall = evaluation.compute_prc(labels, pred_probs)
evaluation.plot_curves("prc", framework, d, recall, precision, palette[d],
diag=False, opacity=0.4, path="logistic_regression/")
###Output
_____no_output_____
###Markdown
Reverse inference
###Code
d = "reverse"
pred_probs = fit[d].predict_proba(act_bin.loc[test].values)
labels = scores.loc[test].values
###Output
_____no_output_____
###Markdown
ROC curves
###Code
fpr, tpr = evaluation.compute_roc(labels, pred_probs)
evaluation.plot_curves("roc", framework, d, fpr, tpr, palette[d],
opacity=0.65, path="logistic_regression/")
###Output
_____no_output_____
###Markdown
PR curves
###Code
precision, recall = evaluation.compute_prc(labels, pred_probs)
evaluation.plot_curves("prc", framework, d, recall, precision, palette[d],
diag=False, opacity=0.65, path="logistic_regression/")
###Output
_____no_output_____
###Markdown
Compute evaluation metrics Observed values
###Code
from sklearn.metrics import roc_auc_score, f1_score
X = {"forward": scores.loc[test].values, "reverse": act_bin.loc[test].values}
Y = {"forward": act_bin.loc[test].values, "reverse": scores.loc[test].values}
pred_probs = {d: fit[d].predict_proba(X[d]) for d in directions}
preds = {d: 1 * (pred_probs[d] > 0.5) for d in directions}
obs = {d: {} for d in directions}
for d in directions:
obs[d]["rocauc"] = evaluation.compute_eval_metric(Y[d], pred_probs[d], roc_auc_score)
obs[d]["f1"] = evaluation.compute_eval_metric(Y[d], preds[d], f1_score)
###Output
_____no_output_____
###Markdown
Bootstrap distributions
###Code
import os
boot = {d: {} for d in directions}
for d in directions:
print("{}".format(d.title()))
boot[d]["rocauc"] = np.empty((len(obs[d]["rocauc"]), n_iter))
boot[d]["f1"] = np.empty((len(obs[d]["f1"]), n_iter))
rocauc_file = "logistic_regression/data/rocauc_boot_{}_{}_{}iter.csv".format(framework, d, n_iter)
if os.path.isfile(rocauc_file):
boot[d]["rocauc"] = pd.read_csv(rocauc_file, index_col=0, header=0).values
print("\tLoaded ROC-AUC from file")
else:
print("ROC-AUC")
for n in range(n_iter):
samp = np.random.choice(range(m), size=m, replace=True)
boot[d]["rocauc"][:,n] = evaluation.compute_eval_metric(Y[d][samp,:], pred_probs[d][samp,:], roc_auc_score)
if n % (n_iter/10) == 0:
print("\tIteration {}".format(n))
f1_file = "logistic_regression/data/f1_boot_{}_{}_{}iter.csv".format(framework, d, n_iter)
if os.path.isfile(f1_file):
boot[d]["f1"] = pd.read_csv(f1_file, index_col=0, header=0).values
print("\tLoaded F1 from file")
else:
print("F1")
for n in range(n_iter):
samp = np.random.choice(range(m), size=m, replace=True)
boot[d]["f1"][:,n] = evaluation.compute_eval_metric(Y[d][samp,:], preds[d][samp,:], f1_score)
if n % (n_iter/10) == 0:
print("\tIteration {}".format(n))
print("")
###Output
Forward
Loaded ROC-AUC from file
Loaded F1 from file
Reverse
Loaded ROC-AUC from file
Loaded F1 from file
###Markdown
Null distributions
###Code
null = {d: {} for d in directions}
for d in directions:
print("{}".format(d.title()))
null[d]["rocauc"] = np.empty((len(obs[d]["rocauc"]), n_iter))
null[d]["f1"] = np.empty((len(obs[d]["f1"]), n_iter))
rocauc_file = "logistic_regression/data/rocauc_null_{}_{}_{}iter.csv".format(framework, d, n_iter)
if os.path.isfile(rocauc_file):
null[d]["rocauc"] = pd.read_csv(rocauc_file, index_col=0, header=0).values
print("\tLoaded ROC-AUC from file")
else:
print("ROC-AUC")
for n in range(n_iter):
shuf = np.random.choice(range(m), size=m, replace=False)
null[d]["rocauc"][:,n] = evaluation.compute_eval_metric(Y[d][shuf,:], pred_probs[d], roc_auc_score)
if n % (n_iter/10) == 0:
print("\tIteration {}".format(n))
f1_file = "logistic_regression/data/f1_null_{}_{}_{}iter.csv".format(framework, d, n_iter)
if os.path.isfile(f1_file):
null[d]["f1"] = pd.read_csv(f1_file, index_col=0, header=0).values
print("\tLoaded F1 from file")
else:
print("F1")
for n in range(n_iter):
shuf = np.random.choice(range(m), size=m, replace=False)
null[d]["f1"][:,n] = evaluation.compute_eval_metric(Y[d][shuf,:], preds[d], f1_score)
if n % (n_iter/10) == 0:
print("\tIteration {}".format(n))
print("")
###Output
Forward
Loaded ROC-AUC from file
Loaded F1 from file
Reverse
Loaded ROC-AUC from file
Loaded F1 from file
###Markdown
Null confidence intervals
###Code
interval = 0.999
idx_lower = int((1.0-interval)*n_iter)
idx_upper = int(interval*n_iter)
metric_labels = ["rocauc", "f1"]
null_ci = {d: {} for d in directions}
for metric in metric_labels:
for d in directions:
dist = null[d][metric]
n_clf = dist.shape[0]
null_ci[d][metric] = {}
null_ci[d][metric]["lower"] = [sorted(dist[i,:])[idx_lower] for i in range(n_clf)]
null_ci[d][metric]["upper"] = [sorted(dist[i,:])[idx_upper] for i in range(n_clf)]
null_ci[d][metric]["mean"] = [np.mean(dist[i,:]) for i in range(n_clf)]
###Output
_____no_output_____
###Markdown
Perform hypothesis testing
###Code
from statsmodels.stats.multitest import multipletests
p = {d: {} for d in directions}
for metric in metric_labels:
for d in directions:
dist = null[d][metric]
n_clf = dist.shape[0]
p[d][metric] = [np.sum(dist[i,:] >= obs[d][metric][i]) / float(n_iter) for i in range(n_clf)]
fdr = {d: {} for d in directions}
for metric in metric_labels:
for d in directions:
fdr[d][metric] = multipletests(p[d][metric], method="fdr_bh")[1]
###Output
_____no_output_____
###Markdown
Plot evaluation metrics Forward inference
###Code
struct_labels = pd.read_csv("../data/brain/labels.csv", index_col=None)
struct_labels.index = struct_labels["PREPROCESSED"]
struct_labels = struct_labels.loc[act_bin.columns, "ABBREVIATION"].values
d = "forward"
metric = "rocauc"
evaluation.plot_eval_metric(metric, framework, d, obs[d][metric],
boot[d][metric], null_ci[d][metric], fdr[d][metric],
palette[d], labels=struct_labels, dx=0.375, dxs=0.55,
figsize=(13, 3.2), ylim=[0.4, 0.8], alphas=[alpha], path="logistic_regression/")
metric = "f1"
evaluation.plot_eval_metric(metric, framework, d, obs[d][metric],
boot[d][metric], null_ci[d][metric], fdr[d][metric],
palette[d], labels=struct_labels, dx=0.375, dxs=0.55,
figsize=(13, 3.2), ylim=[0.3, 0.7], alphas=[alpha], path="logistic_regression/")
###Output
_____no_output_____
###Markdown
Reverse inference
###Code
d = "reverse"
metric = "rocauc"
evaluation.plot_eval_metric(metric, framework, d, obs[d][metric],
boot[d][metric], null_ci[d][metric], fdr[d][metric],
palette[d], labels=[], dx=0.375, dxs=0.11,
figsize=(3.6, 3.2), ylim=[0.4, 0.8], alphas=[alpha], path="logistic_regression/")
metric = "f1"
evaluation.plot_eval_metric(metric, framework, d, obs[d][metric],
boot[d][metric], null_ci[d][metric], fdr[d][metric],
palette[d], labels=[], dx=0.375, dxs=0.11,
figsize=(3.6, 3.2), ylim=[0.3, 0.7], alphas=[alpha], path="logistic_regression/")
###Output
_____no_output_____
###Markdown
Export metric distributions
###Code
labels = {"forward": act_bin.columns, "reverse": domains}
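# Save the observed metrics plus the full bootstrap and null distributions so that subsequent runs can load them instead of recomputing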
for metric in metric_labels:
for d in directions:
for dist, dic in zip(["boot", "null"], [boot, null]):
df = pd.DataFrame(dic[d][metric],
index=labels[d], columns=range(n_iter))
df.to_csv("logistic_regression/data/{}_{}_{}_{}_{}iter.csv".format(
metric, dist, framework, d, n_iter))
obs_df = pd.Series(obs[d][metric], index=labels[d])
obs_df.to_csv("logistic_regression/data/{}_obs_{}_{}.csv".format(metric, framework, d))
###Output
/anaconda3/envs/ontol/lib/python3.6/site-packages/ipykernel_launcher.py:9: FutureWarning: The signature of `Series.to_csv` was aligned to that of `DataFrame.to_csv`, and argument 'header' will change its default value from False to True: please pass an explicit value to suppress this warning.
if __name__ == '__main__':
###Markdown
Compare to neural networks
Load neural network fits
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
torch.manual_seed(42)
from neural_network.prediction import Net
opt_epochs = 500 # Epochs used to optimize the classifier hyperparameters
train_epochs = 1000 # Epochs used to train the classifier
fit_nn = {}
for direction in directions:
hyperparams = pd.read_csv("neural_network/data/params_{}_{}_{}epochs.csv".format(framework, direction, opt_epochs), header=None, index_col=0)
h = {str(label): float(value) for label, value in hyperparams.iterrows()}
state_dict = torch.load("neural_network/fits/{}_{}_{}epochs.pt".format(framework, direction, train_epochs))
layers = list(state_dict.keys())
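# Infer the layer sizes from the saved weights: columns of the first weight matrix give n_input, rows of the last weight matrix give n_output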
n_input = state_dict[layers[0]].shape[1]
n_output = state_dict[layers[-2]].shape[0]
fit_nn[direction] = Net(n_input=n_input, n_output=n_output,
n_hid=int(h["n_hid"]), p_dropout=h["p_dropout"])
fit_nn[direction].load_state_dict(state_dict)
print("-"*50 + "\n{} INFERENCE CLASSIFIER\n".format(direction.upper()) + "-"*50)
print(fit_nn[direction])
print("")
###Output
--------------------------------------------------
FORWARD INFERENCE CLASSIFIER
--------------------------------------------------
Net(
(fc1): Linear(in_features=6, out_features=125, bias=True)
(bn1): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout1): Dropout(p=0.1)
(fc2): Linear(in_features=125, out_features=125, bias=True)
(bn2): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout2): Dropout(p=0.1)
(fc3): Linear(in_features=125, out_features=125, bias=True)
(bn3): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout3): Dropout(p=0.1)
(fc4): Linear(in_features=125, out_features=125, bias=True)
(bn4): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout4): Dropout(p=0.1)
(fc5): Linear(in_features=125, out_features=125, bias=True)
(bn5): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout5): Dropout(p=0.1)
(fc6): Linear(in_features=125, out_features=125, bias=True)
(bn6): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout6): Dropout(p=0.1)
(fc7): Linear(in_features=125, out_features=125, bias=True)
(bn7): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout7): Dropout(p=0.1)
(fc8): Linear(in_features=125, out_features=118, bias=True)
)
--------------------------------------------------
REVERSE INFERENCE CLASSIFIER
--------------------------------------------------
Net(
(fc1): Linear(in_features=118, out_features=100, bias=True)
(bn1): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout1): Dropout(p=0.3)
(fc2): Linear(in_features=100, out_features=100, bias=True)
(bn2): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout2): Dropout(p=0.3)
(fc3): Linear(in_features=100, out_features=100, bias=True)
(bn3): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout3): Dropout(p=0.3)
(fc4): Linear(in_features=100, out_features=100, bias=True)
(bn4): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout4): Dropout(p=0.3)
(fc5): Linear(in_features=100, out_features=100, bias=True)
(bn5): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout5): Dropout(p=0.3)
(fc6): Linear(in_features=100, out_features=100, bias=True)
(bn6): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout6): Dropout(p=0.3)
(fc7): Linear(in_features=100, out_features=100, bias=True)
(bn7): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout7): Dropout(p=0.3)
(fc8): Linear(in_features=100, out_features=6, bias=True)
)
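###Markdown
A minimal usage sketch for the loaded classifiers, assuming `Net.forward` accepts a float tensor of shape `(batch, n_input)` and returns one score per output label (the dummy input below is illustrative only):
###Code
# Hypothetical sketch: run the loaded forward-inference network on a dummy batch
net = fit_nn["forward"]
net.eval()  # disable dropout and use the stored batch-norm running statistics
with torch.no_grad():
    dummy_X = torch.zeros((4, 6), dtype=torch.float32)  # 6 input features, matching the printed architecture
    dummy_scores = net(dummy_X)  # shape (4, 118), matching the 118 output units printed above
###Output
_____no_output_____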
###Markdown
Load neural network evaluation data
###Code
boot_nn = {d: {} for d in directions}
for d in directions:
boot_nn[d]["rocauc"] = pd.read_csv("neural_network/data/rocauc_boot_{}_{}_{}iter.csv".format(framework, d, n_iter), index_col=0, header=0).values
boot_nn[d]["f1"] = pd.read_csv("neural_network/data/f1_boot_{}_{}_{}iter.csv".format(framework, d, n_iter), index_col=0, header=0).values
###Output
_____no_output_____
###Markdown
Export classifier comparison
###Code
lower_i = int(0.001 * n_iter)
upper_i = int(0.999 * n_iter)
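# Bootstrap distribution of the paired difference (logistic regression minus neural network) for each classifier, summarized by its 0.1st and 99.9th percentile bounds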
for metric in metric_labels:
for d in directions:
dist = boot[d][metric] - boot_nn[d][metric]
dist = [sorted(row) for row in dist]
lower_CI = [row[lower_i] for row in dist]
upper_CI = [row[upper_i] for row in dist]
df = pd.DataFrame({"CI_LOWER": lower_CI, "CI_UPPER": upper_CI})
df.to_csv("data/{}_lr-nn_{}_{}.csv".format(metric, framework, d), index=None)
###Output
_____no_output_____